2026-01-03 00:00:06.464717 | Job console starting
2026-01-03 00:00:06.494681 | Updating git repos
2026-01-03 00:00:06.639976 | Cloning repos into workspace
2026-01-03 00:00:06.979597 | Restoring repo states
2026-01-03 00:00:07.008774 | Merging changes
2026-01-03 00:00:07.008807 | Checking out repos
2026-01-03 00:00:07.398174 | Preparing playbooks
2026-01-03 00:00:08.778012 | Running Ansible setup
2026-01-03 00:00:17.330349 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-03 00:00:20.891806 |
2026-01-03 00:00:20.891991 | PLAY [Base pre]
2026-01-03 00:00:21.029851 |
2026-01-03 00:00:21.030032 | TASK [Setup log path fact]
2026-01-03 00:00:21.109906 | orchestrator | ok
2026-01-03 00:00:21.233255 |
2026-01-03 00:00:21.233454 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-03 00:00:21.376326 | orchestrator | ok
2026-01-03 00:00:21.455863 |
2026-01-03 00:00:21.456019 | TASK [emit-job-header : Print job information]
2026-01-03 00:00:21.768978 | # Job Information
2026-01-03 00:00:21.769209 | Ansible Version: 2.16.14
2026-01-03 00:00:21.769250 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-03 00:00:21.769287 | Pipeline: periodic-midnight
2026-01-03 00:00:21.769310 | Executor: 521e9411259a
2026-01-03 00:00:21.769331 | Triggered by: https://github.com/osism/testbed
2026-01-03 00:00:21.769353 | Event ID: 088a682cd46040ec8feff3feacdb5d3f
2026-01-03 00:00:21.787827 |
2026-01-03 00:00:21.788562 | LOOP [emit-job-header : Print node information]
2026-01-03 00:00:22.941910 | orchestrator | ok:
2026-01-03 00:00:22.942111 | orchestrator | # Node Information
2026-01-03 00:00:22.942147 | orchestrator | Inventory Hostname: orchestrator
2026-01-03 00:00:22.942187 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-03 00:00:22.942210 | orchestrator | Username: zuul-testbed01
2026-01-03 00:00:22.942232 | orchestrator | Distro: Debian 12.12
2026-01-03 00:00:22.942256 | orchestrator | Provider: static-testbed
2026-01-03 00:00:22.942278 | orchestrator | Region:
2026-01-03 00:00:22.942299 | orchestrator | Label: testbed-orchestrator
2026-01-03 00:00:22.942319 | orchestrator | Product Name: OpenStack Nova
2026-01-03 00:00:22.942338 | orchestrator | Interface IP: 81.163.193.140
2026-01-03 00:00:22.966024 |
2026-01-03 00:00:22.966195 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-03 00:00:25.566668 | orchestrator -> localhost | changed
2026-01-03 00:00:25.583306 |
2026-01-03 00:00:25.583471 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-03 00:00:31.241649 | orchestrator -> localhost | changed
2026-01-03 00:00:31.311888 |
2026-01-03 00:00:31.312052 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-03 00:00:32.785012 | orchestrator -> localhost | ok
2026-01-03 00:00:32.792839 |
2026-01-03 00:00:32.792980 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-03 00:00:32.898448 | orchestrator | ok
2026-01-03 00:00:33.014597 | orchestrator | included: /var/lib/zuul/builds/c5d31f13fb3e48e7b669edbaeaa9591b/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-03 00:00:33.060768 |
2026-01-03 00:00:33.075369 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-03 00:00:41.911698 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-03 00:00:41.911941 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/c5d31f13fb3e48e7b669edbaeaa9591b/work/c5d31f13fb3e48e7b669edbaeaa9591b_id_rsa
2026-01-03 00:00:41.911976 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/c5d31f13fb3e48e7b669edbaeaa9591b/work/c5d31f13fb3e48e7b669edbaeaa9591b_id_rsa.pub
2026-01-03 00:00:41.911998 | orchestrator -> localhost | The key fingerprint is:
2026-01-03 00:00:41.912017 | orchestrator -> localhost | SHA256:8z6HR9FXOOAjJqjbZx1WybtXWKSeWx0BGvd1eU9MyJ8 zuul-build-sshkey
2026-01-03 00:00:41.912074 | orchestrator -> localhost | The key's randomart image is:
2026-01-03 00:00:41.912101 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-03 00:00:41.912119 | orchestrator -> localhost | | ..=.**|
2026-01-03 00:00:41.912137 | orchestrator -> localhost | | . o+.*+O|
2026-01-03 00:00:41.912180 | orchestrator -> localhost | | . . o.*..**|
2026-01-03 00:00:41.912199 | orchestrator -> localhost | | . o o.+.E*|
2026-01-03 00:00:41.912216 | orchestrator -> localhost | | . S o ..+.+|
2026-01-03 00:00:41.912237 | orchestrator -> localhost | | o = ... + |
2026-01-03 00:00:41.912254 | orchestrator -> localhost | | . . o oo. o |
2026-01-03 00:00:41.912271 | orchestrator -> localhost | | o .o o. |
2026-01-03 00:00:41.912287 | orchestrator -> localhost | | .+ |
2026-01-03 00:00:41.912304 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-03 00:00:41.912416 | orchestrator -> localhost | ok: Runtime: 0:00:05.676953
2026-01-03 00:00:41.922651 |
2026-01-03 00:00:41.922748 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-03 00:00:42.038739 | orchestrator | ok
2026-01-03 00:00:42.070510 | orchestrator | included: /var/lib/zuul/builds/c5d31f13fb3e48e7b669edbaeaa9591b/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-03 00:00:42.131934 |
2026-01-03 00:00:42.132039 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-03 00:00:42.184113 | orchestrator | skipping: Conditional result was False
2026-01-03 00:00:42.212506 |
2026-01-03 00:00:42.217330 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-03 00:00:43.789387 | orchestrator | changed
2026-01-03 00:00:43.799458 |
2026-01-03 00:00:43.799555 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-03 00:00:44.166104 | orchestrator | ok
2026-01-03 00:00:44.172670 |
2026-01-03 00:00:44.172756 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-03 00:00:44.762458 | orchestrator | ok
2026-01-03 00:00:44.790599 |
2026-01-03 00:00:44.790701 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-03 00:00:45.586993 | orchestrator | ok
2026-01-03 00:00:45.597114 |
2026-01-03 00:00:45.597226 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-03 00:00:45.640541 | orchestrator | skipping: Conditional result was False
2026-01-03 00:00:45.647572 |
2026-01-03 00:00:45.647682 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-03 00:00:47.424805 | orchestrator -> localhost | changed
2026-01-03 00:00:47.478591 |
2026-01-03 00:00:47.478708 | TASK [add-build-sshkey : Add back temp key]
2026-01-03 00:00:49.060759 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/c5d31f13fb3e48e7b669edbaeaa9591b/work/c5d31f13fb3e48e7b669edbaeaa9591b_id_rsa (zuul-build-sshkey)
2026-01-03 00:00:49.060955 | orchestrator -> localhost | ok: Runtime: 0:00:00.027879
2026-01-03 00:00:49.068234 |
2026-01-03 00:00:49.068332 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-03 00:00:50.317513 | orchestrator | ok
2026-01-03 00:00:50.347019 |
2026-01-03 00:00:50.347144 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-03 00:00:50.390121 | orchestrator | skipping: Conditional result was False
2026-01-03 00:00:50.517229 |
2026-01-03 00:00:50.517332 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-03 00:00:51.490059 | orchestrator | ok
2026-01-03 00:00:51.515367 |
2026-01-03 00:00:51.515472 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-03 00:00:51.553716 | orchestrator | ok
2026-01-03 00:00:51.559856 |
2026-01-03 00:00:51.559953 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-03 00:00:52.693993 | orchestrator -> localhost | ok
2026-01-03 00:00:52.699907 |
2026-01-03 00:00:52.699994 | TASK [validate-host : Collect information about the host]
2026-01-03 00:00:54.425453 | orchestrator | ok
2026-01-03 00:00:54.448605 |
2026-01-03 00:00:54.448716 | TASK [validate-host : Sanitize hostname]
2026-01-03 00:00:54.624298 | orchestrator | ok
2026-01-03 00:00:54.629076 |
2026-01-03 00:00:54.629175 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-03 00:00:57.491356 | orchestrator -> localhost | changed
2026-01-03 00:00:57.496860 |
2026-01-03 00:00:57.496981 | TASK [validate-host : Collect information about zuul worker]
2026-01-03 00:00:58.679251 | orchestrator | ok
2026-01-03 00:00:58.684120 |
2026-01-03 00:00:58.684231 | TASK [validate-host : Write out all zuul information for each host]
2026-01-03 00:01:00.561571 | orchestrator -> localhost | changed
2026-01-03 00:01:00.570503 |
2026-01-03 00:01:00.570589 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-03 00:01:00.959046 | orchestrator | ok
2026-01-03 00:01:00.964188 |
2026-01-03 00:01:00.964279 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-03 00:02:24.639524 | orchestrator | changed:
2026-01-03 00:02:24.639766 | orchestrator | .d..t...... src/
2026-01-03 00:02:24.639802 | orchestrator | .d..t...... src/github.com/
2026-01-03 00:02:24.639829 | orchestrator | .d..t...... src/github.com/osism/
2026-01-03 00:02:24.639851 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-03 00:02:24.639871 | orchestrator | RedHat.yml
2026-01-03 00:02:24.655890 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-03 00:02:24.655913 | orchestrator | RedHat.yml
2026-01-03 00:02:24.655975 | orchestrator | = 1.53.0"...
2026-01-03 00:02:36.701853 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-03 00:02:36.833589 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-03 00:02:37.404785 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-03 00:02:37.512906 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-03 00:02:38.274398 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-03 00:02:38.333831 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-03 00:02:39.110906 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-03 00:02:39.111006 | orchestrator |
2026-01-03 00:02:39.111013 | orchestrator | Providers are signed by their developers.
2026-01-03 00:02:39.111018 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-03 00:02:39.111030 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-03 00:02:39.111072 | orchestrator |
2026-01-03 00:02:39.111078 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-03 00:02:39.111083 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-03 00:02:39.111097 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-03 00:02:39.111108 | orchestrator | you run "tofu init" in the future.
2026-01-03 00:02:39.111580 | orchestrator |
2026-01-03 00:02:39.111623 | orchestrator | OpenTofu has been successfully initialized!
2026-01-03 00:02:39.111657 | orchestrator |
2026-01-03 00:02:39.111663 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-03 00:02:39.111667 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-03 00:02:39.111672 | orchestrator | should now work.
2026-01-03 00:02:39.111676 | orchestrator |
2026-01-03 00:02:39.111680 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-03 00:02:39.111684 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-03 00:02:39.111696 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-03 00:02:39.357114 | orchestrator | Created and switched to workspace "ci"!
2026-01-03 00:02:39.357196 | orchestrator |
2026-01-03 00:02:39.357203 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-03 00:02:39.357210 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-03 00:02:39.357214 | orchestrator | for this configuration.
2026-01-03 00:02:39.582201 | orchestrator | ci.auto.tfvars
2026-01-03 00:02:39.704690 | orchestrator | default_custom.tf
2026-01-03 00:02:40.805911 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-03 00:02:41.430417 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-03 00:02:41.771946 | orchestrator |
2026-01-03 00:02:41.772012 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-03 00:02:41.772020 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-03 00:02:41.772047 | orchestrator | + create
2026-01-03 00:02:41.772063 | orchestrator | <= read (data resources)
2026-01-03 00:02:41.772076 | orchestrator |
2026-01-03 00:02:41.772080 | orchestrator | OpenTofu will perform the following actions:
2026-01-03 00:02:41.772192 | orchestrator |
2026-01-03 00:02:41.772205 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-03 00:02:41.772211 | orchestrator | # (config refers to values not yet known)
2026-01-03 00:02:41.772215 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-03 00:02:41.772219 | orchestrator | + checksum = (known after apply)
2026-01-03 00:02:41.772224 | orchestrator | + created_at = (known after apply)
2026-01-03 00:02:41.772228 | orchestrator | + file = (known after apply)
2026-01-03 00:02:41.772232 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.772252 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.772256 | orchestrator | + min_disk_gb = (known after apply)
2026-01-03 00:02:41.772260 | orchestrator | + min_ram_mb = (known after apply)
2026-01-03 00:02:41.772264 | orchestrator | + most_recent = true
2026-01-03 00:02:41.772268 | orchestrator | + name = (known after apply)
2026-01-03 00:02:41.772272 | orchestrator | + protected = (known after apply)
2026-01-03 00:02:41.772276 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.772283 | orchestrator | + schema = (known after apply)
2026-01-03 00:02:41.772287 | orchestrator | + size_bytes = (known after apply)
2026-01-03 00:02:41.772291 | orchestrator | + tags = (known after apply)
2026-01-03 00:02:41.772295 | orchestrator | + updated_at = (known after apply)
2026-01-03 00:02:41.772299 | orchestrator | }
2026-01-03 00:02:41.772381 | orchestrator |
2026-01-03 00:02:41.772393 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-03 00:02:41.772398 | orchestrator | # (config refers to values not yet known)
2026-01-03 00:02:41.772402 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-03 00:02:41.772406 | orchestrator | + checksum = (known after apply)
2026-01-03 00:02:41.772410 | orchestrator | + created_at = (known after apply)
2026-01-03 00:02:41.772414 | orchestrator | + file = (known after apply)
2026-01-03 00:02:41.772418 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.772422 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.772425 | orchestrator | + min_disk_gb = (known after apply)
2026-01-03 00:02:41.772429 | orchestrator | + min_ram_mb = (known after apply)
2026-01-03 00:02:41.772433 | orchestrator | + most_recent = true
2026-01-03 00:02:41.772437 | orchestrator | + name = (known after apply)
2026-01-03 00:02:41.772441 | orchestrator | + protected = (known after apply)
2026-01-03 00:02:41.772445 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.772449 | orchestrator | + schema = (known after apply)
2026-01-03 00:02:41.772453 | orchestrator | + size_bytes = (known after apply)
2026-01-03 00:02:41.772457 | orchestrator | + tags = (known after apply)
2026-01-03 00:02:41.772461 | orchestrator | + updated_at = (known after apply)
2026-01-03 00:02:41.772465 | orchestrator | }
2026-01-03 00:02:41.772544 | orchestrator |
2026-01-03 00:02:41.772557 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-03 00:02:41.772562 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-03 00:02:41.772566 | orchestrator | + content = (known after apply)
2026-01-03 00:02:41.772571 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-03 00:02:41.772575 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-03 00:02:41.772578 | orchestrator | + content_md5 = (known after apply)
2026-01-03 00:02:41.772582 | orchestrator | + content_sha1 = (known after apply)
2026-01-03 00:02:41.772586 | orchestrator | + content_sha256 = (known after apply)
2026-01-03 00:02:41.772590 | orchestrator | + content_sha512 = (known after apply)
2026-01-03 00:02:41.772594 | orchestrator | + directory_permission = "0777"
2026-01-03 00:02:41.772598 | orchestrator | + file_permission = "0644"
2026-01-03 00:02:41.772601 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-03 00:02:41.772605 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.772609 | orchestrator | }
2026-01-03 00:02:41.772678 | orchestrator |
2026-01-03 00:02:41.772689 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-03 00:02:41.772694 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-03 00:02:41.772698 | orchestrator | + content = (known after apply)
2026-01-03 00:02:41.772702 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-03 00:02:41.772706 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-03 00:02:41.772710 | orchestrator | + content_md5 = (known after apply)
2026-01-03 00:02:41.772713 | orchestrator | + content_sha1 = (known after apply)
2026-01-03 00:02:41.772717 | orchestrator | + content_sha256 = (known after apply)
2026-01-03 00:02:41.772721 | orchestrator | + content_sha512 = (known after apply)
2026-01-03 00:02:41.772725 | orchestrator | + directory_permission = "0777"
2026-01-03 00:02:41.772729 | orchestrator | + file_permission = "0644"
2026-01-03 00:02:41.772741 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-03 00:02:41.772745 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.772749 | orchestrator | }
2026-01-03 00:02:41.772816 | orchestrator |
2026-01-03 00:02:41.772832 | orchestrator | # local_file.inventory will be created
2026-01-03 00:02:41.772837 | orchestrator | + resource "local_file" "inventory" {
2026-01-03 00:02:41.772841 | orchestrator | + content = (known after apply)
2026-01-03 00:02:41.772845 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-03 00:02:41.772848 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-03 00:02:41.772852 | orchestrator | + content_md5 = (known after apply)
2026-01-03 00:02:41.772856 | orchestrator | + content_sha1 = (known after apply)
2026-01-03 00:02:41.772860 | orchestrator | + content_sha256 = (known after apply)
2026-01-03 00:02:41.772864 | orchestrator | + content_sha512 = (known after apply)
2026-01-03 00:02:41.772868 | orchestrator | + directory_permission = "0777"
2026-01-03 00:02:41.772888 | orchestrator | + file_permission = "0644"
2026-01-03 00:02:41.772895 | orchestrator | + filename = "inventory.ci"
2026-01-03 00:02:41.772901 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.772907 | orchestrator | }
2026-01-03 00:02:41.772991 | orchestrator |
2026-01-03 00:02:41.773004 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-03 00:02:41.773008 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-03 00:02:41.773012 | orchestrator | + content = (sensitive value)
2026-01-03 00:02:41.773016 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-03 00:02:41.773020 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-03 00:02:41.773023 | orchestrator | + content_md5 = (known after apply)
2026-01-03 00:02:41.773027 | orchestrator | + content_sha1 = (known after apply)
2026-01-03 00:02:41.773031 | orchestrator | + content_sha256 = (known after apply)
2026-01-03 00:02:41.773035 | orchestrator | + content_sha512 = (known after apply)
2026-01-03 00:02:41.773039 | orchestrator | + directory_permission = "0700"
2026-01-03 00:02:41.773043 | orchestrator | + file_permission = "0600"
2026-01-03 00:02:41.773046 | orchestrator | + filename = ".id_rsa.ci"
2026-01-03 00:02:41.773050 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.773054 | orchestrator | }
2026-01-03 00:02:41.773075 | orchestrator |
2026-01-03 00:02:41.773086 | orchestrator | # null_resource.node_semaphore will be created
2026-01-03 00:02:41.773090 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-03 00:02:41.773094 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.773098 | orchestrator | }
2026-01-03 00:02:41.773165 | orchestrator |
2026-01-03 00:02:41.773176 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-03 00:02:41.773181 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-03 00:02:41.773185 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.773189 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.773193 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.773197 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.773201 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.773205 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-03 00:02:41.773209 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.773212 | orchestrator | + size = 80
2026-01-03 00:02:41.773216 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.773220 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.773224 | orchestrator | }
2026-01-03 00:02:41.773286 | orchestrator |
2026-01-03 00:02:41.773298 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-03 00:02:41.773303 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-03 00:02:41.773307 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.773310 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.773314 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.773323 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.773327 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.773331 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-03 00:02:41.773335 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.773339 | orchestrator | + size = 80
2026-01-03 00:02:41.773343 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.773346 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.773350 | orchestrator | }
2026-01-03 00:02:41.773410 | orchestrator |
2026-01-03 00:02:41.773422 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-03 00:02:41.773426 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-03 00:02:41.773430 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.773434 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.773437 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.773441 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.773445 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.773449 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-03 00:02:41.773453 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.773456 | orchestrator | + size = 80
2026-01-03 00:02:41.773460 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.773464 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.773468 | orchestrator | }
2026-01-03 00:02:41.773532 | orchestrator |
2026-01-03 00:02:41.773543 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-03 00:02:41.773547 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-03 00:02:41.773551 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.773555 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.773559 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.773563 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.773567 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.773570 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-03 00:02:41.773574 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.773578 | orchestrator | + size = 80
2026-01-03 00:02:41.773582 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.773586 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.773589 | orchestrator | }
2026-01-03 00:02:41.773650 | orchestrator |
2026-01-03 00:02:41.773661 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-03 00:02:41.773666 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-03 00:02:41.773669 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.773673 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.773677 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.773681 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.773685 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.773692 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-03 00:02:41.773696 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.773700 | orchestrator | + size = 80
2026-01-03 00:02:41.773703 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.773707 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.773711 | orchestrator | }
2026-01-03 00:02:41.773775 | orchestrator |
2026-01-03 00:02:41.773787 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-03 00:02:41.773792 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-03 00:02:41.773796 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.773800 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.773804 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.773814 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.773819 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.773825 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-03 00:02:41.773831 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.773837 | orchestrator | + size = 80
2026-01-03 00:02:41.773842 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.773848 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.773854 | orchestrator | }
2026-01-03 00:02:41.773951 | orchestrator |
2026-01-03 00:02:41.773965 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-03 00:02:41.773970 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-03 00:02:41.773974 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.773977 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.773981 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.773985 | orchestrator | + image_id = (known after apply)
2026-01-03 00:02:41.773989 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.773993 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-03 00:02:41.773997 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.774001 | orchestrator | + size = 80
2026-01-03 00:02:41.774005 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.774009 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.774143 | orchestrator | }
2026-01-03 00:02:41.774216 | orchestrator |
2026-01-03 00:02:41.774232 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-03 00:02:41.774237 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.774241 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.774245 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.774249 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.774253 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.774257 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-03 00:02:41.774262 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.774266 | orchestrator | + size = 20
2026-01-03 00:02:41.774270 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.774274 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.774278 | orchestrator | }
2026-01-03 00:02:41.774339 | orchestrator |
2026-01-03 00:02:41.774350 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-03 00:02:41.774354 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.774358 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.774362 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.774366 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.774370 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.774374 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-03 00:02:41.774378 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.774381 | orchestrator | + size = 20
2026-01-03 00:02:41.774385 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.774390 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.774394 | orchestrator | }
2026-01-03 00:02:41.774469 | orchestrator |
2026-01-03 00:02:41.774481 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-03 00:02:41.774487 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.774490 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.774494 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.774498 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.774502 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.774505 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-03 00:02:41.774509 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.774519 | orchestrator | + size = 20
2026-01-03 00:02:41.774523 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.774527 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.774531 | orchestrator | }
2026-01-03 00:02:41.774591 | orchestrator |
2026-01-03 00:02:41.774601 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-03 00:02:41.774606 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.774609 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.774613 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.774617 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.774621 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.774625 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-03 00:02:41.774628 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.774632 | orchestrator | + size = 20
2026-01-03 00:02:41.774636 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.774640 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.774644 | orchestrator | }
2026-01-03 00:02:41.774697 | orchestrator |
2026-01-03 00:02:41.774707 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-03 00:02:41.774712 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.774716 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.774719 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.774723 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.774727 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.774731 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-03 00:02:41.774735 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.774742 | orchestrator | + size = 20
2026-01-03 00:02:41.774746 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.774750 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.774753 | orchestrator | }
2026-01-03 00:02:41.774820 | orchestrator |
2026-01-03 00:02:41.774837 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-03 00:02:41.774841 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.774845 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.774849 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.774853 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.774857 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.774861 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-03 00:02:41.774865 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.774869 | orchestrator | + size = 20
2026-01-03 00:02:41.774892 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.774895 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.774899 | orchestrator | }
2026-01-03 00:02:41.774969 | orchestrator |
2026-01-03 00:02:41.774980 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-03 00:02:41.774985 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.774989 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.774992 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.774996 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.775000 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.775004 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-03 00:02:41.775008 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.775011 | orchestrator | + size = 20
2026-01-03 00:02:41.775015 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.775019 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.775023 | orchestrator | }
2026-01-03 00:02:41.775078 | orchestrator |
2026-01-03 00:02:41.775093 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-03 00:02:41.775098 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-03 00:02:41.775107 | orchestrator | + attachment = (known after apply)
2026-01-03 00:02:41.775112 | orchestrator | + availability_zone = "nova"
2026-01-03 00:02:41.775116 | orchestrator | + id = (known after apply)
2026-01-03 00:02:41.775120 | orchestrator | + metadata = (known after apply)
2026-01-03 00:02:41.775124 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-03 00:02:41.775128 | orchestrator | + region = (known after apply)
2026-01-03 00:02:41.775132 | orchestrator | + size = 20
2026-01-03 00:02:41.775136 | orchestrator | + volume_retype_policy = "never"
2026-01-03 00:02:41.775140 | orchestrator | + volume_type = "ssd"
2026-01-03 00:02:41.775144 | orchestrator | }
2026-01-03 00:02:41.775213 | orchestrator |
2026-01-03 00:02:41.775225 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-03 00:02:41.775229 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-03 00:02:41.775233 | orchestrator | + attachment = (known after apply) 2026-01-03 00:02:41.775237 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.775240 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.775244 | orchestrator | + metadata = (known after apply) 2026-01-03 00:02:41.775248 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-03 00:02:41.775252 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.775256 | orchestrator | + size = 20 2026-01-03 00:02:41.775260 | orchestrator | + volume_retype_policy = "never" 2026-01-03 00:02:41.775263 | orchestrator | + volume_type = "ssd" 2026-01-03 00:02:41.775267 | orchestrator | } 2026-01-03 00:02:41.775472 | orchestrator | 2026-01-03 00:02:41.775488 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-03 00:02:41.775493 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-03 00:02:41.775497 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.775501 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.775505 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.775509 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.775513 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.775516 | orchestrator | + config_drive = true 2026-01-03 00:02:41.775520 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.775524 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.775528 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-03 00:02:41.775532 | orchestrator | + force_delete = false 2026-01-03 00:02:41.775536 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.775540 | 
orchestrator | + id = (known after apply) 2026-01-03 00:02:41.775544 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.775548 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.775552 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.775556 | orchestrator | + name = "testbed-manager" 2026-01-03 00:02:41.775559 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.775563 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.775567 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.775571 | orchestrator | + stop_before_destroy = false 2026-01-03 00:02:41.775575 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.775578 | orchestrator | + user_data = (sensitive value) 2026-01-03 00:02:41.775582 | orchestrator | 2026-01-03 00:02:41.775586 | orchestrator | + block_device { 2026-01-03 00:02:41.775590 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.775594 | orchestrator | + delete_on_termination = false 2026-01-03 00:02:41.775601 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.775605 | orchestrator | + multiattach = false 2026-01-03 00:02:41.775609 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.775613 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.775621 | orchestrator | } 2026-01-03 00:02:41.775625 | orchestrator | 2026-01-03 00:02:41.775629 | orchestrator | + network { 2026-01-03 00:02:41.775633 | orchestrator | + access_network = false 2026-01-03 00:02:41.775637 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.775641 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.775645 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.775649 | orchestrator | + name = (known after apply) 2026-01-03 00:02:41.775653 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.775658 | orchestrator | + uuid = (known after apply) 2026-01-03 
00:02:41.775662 | orchestrator | } 2026-01-03 00:02:41.775665 | orchestrator | } 2026-01-03 00:02:41.775864 | orchestrator | 2026-01-03 00:02:41.775902 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-03 00:02:41.775908 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-03 00:02:41.775912 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.775915 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.775919 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.775923 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.775927 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.775930 | orchestrator | + config_drive = true 2026-01-03 00:02:41.775934 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.775938 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.775942 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-03 00:02:41.775945 | orchestrator | + force_delete = false 2026-01-03 00:02:41.775949 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.775953 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.775957 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.775960 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.775964 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.775968 | orchestrator | + name = "testbed-node-0" 2026-01-03 00:02:41.775972 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.775975 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.775979 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.775983 | orchestrator | + stop_before_destroy = false 2026-01-03 00:02:41.775986 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.775990 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-03 00:02:41.775994 | orchestrator | 2026-01-03 00:02:41.775998 | orchestrator | + block_device { 2026-01-03 00:02:41.776002 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.776005 | orchestrator | + delete_on_termination = false 2026-01-03 00:02:41.776009 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.776013 | orchestrator | + multiattach = false 2026-01-03 00:02:41.776016 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.776020 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.776024 | orchestrator | } 2026-01-03 00:02:41.776028 | orchestrator | 2026-01-03 00:02:41.776032 | orchestrator | + network { 2026-01-03 00:02:41.776035 | orchestrator | + access_network = false 2026-01-03 00:02:41.776039 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.776043 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.776047 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.776051 | orchestrator | + name = (known after apply) 2026-01-03 00:02:41.776055 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.776058 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.776062 | orchestrator | } 2026-01-03 00:02:41.776066 | orchestrator | } 2026-01-03 00:02:41.776262 | orchestrator | 2026-01-03 00:02:41.776275 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-03 00:02:41.776279 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-03 00:02:41.776283 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.776291 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.776295 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.776299 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.776303 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.776306 
| orchestrator | + config_drive = true 2026-01-03 00:02:41.776310 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.776314 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.776318 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-03 00:02:41.776322 | orchestrator | + force_delete = false 2026-01-03 00:02:41.776325 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.776329 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.776333 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.776337 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.776340 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.776344 | orchestrator | + name = "testbed-node-1" 2026-01-03 00:02:41.776348 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.776352 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.776356 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.776359 | orchestrator | + stop_before_destroy = false 2026-01-03 00:02:41.776363 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.776367 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-03 00:02:41.776371 | orchestrator | 2026-01-03 00:02:41.776375 | orchestrator | + block_device { 2026-01-03 00:02:41.776378 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.776382 | orchestrator | + delete_on_termination = false 2026-01-03 00:02:41.776386 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.776389 | orchestrator | + multiattach = false 2026-01-03 00:02:41.776393 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.776397 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.776401 | orchestrator | } 2026-01-03 00:02:41.776405 | orchestrator | 2026-01-03 00:02:41.776408 | orchestrator | + network { 2026-01-03 00:02:41.776412 | orchestrator | + access_network = 
false 2026-01-03 00:02:41.776416 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.776420 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.776423 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.776427 | orchestrator | + name = (known after apply) 2026-01-03 00:02:41.776431 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.776435 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.776439 | orchestrator | } 2026-01-03 00:02:41.776443 | orchestrator | } 2026-01-03 00:02:41.776633 | orchestrator | 2026-01-03 00:02:41.776645 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-03 00:02:41.776650 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-03 00:02:41.776654 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.776657 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.776662 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.776666 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.776673 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.776677 | orchestrator | + config_drive = true 2026-01-03 00:02:41.776680 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.776684 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.776688 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-03 00:02:41.776692 | orchestrator | + force_delete = false 2026-01-03 00:02:41.776695 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.776699 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.776703 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.776710 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.776714 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.776718 | orchestrator | + name = 
"testbed-node-2" 2026-01-03 00:02:41.776722 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.776726 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.776729 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.776733 | orchestrator | + stop_before_destroy = false 2026-01-03 00:02:41.776737 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.776741 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-03 00:02:41.776744 | orchestrator | 2026-01-03 00:02:41.776748 | orchestrator | + block_device { 2026-01-03 00:02:41.776752 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.776756 | orchestrator | + delete_on_termination = false 2026-01-03 00:02:41.776760 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.776763 | orchestrator | + multiattach = false 2026-01-03 00:02:41.776767 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.776771 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.776775 | orchestrator | } 2026-01-03 00:02:41.776778 | orchestrator | 2026-01-03 00:02:41.776782 | orchestrator | + network { 2026-01-03 00:02:41.776786 | orchestrator | + access_network = false 2026-01-03 00:02:41.776790 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.776794 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.776797 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.776801 | orchestrator | + name = (known after apply) 2026-01-03 00:02:41.776805 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.776809 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.776812 | orchestrator | } 2026-01-03 00:02:41.776816 | orchestrator | } 2026-01-03 00:02:41.777083 | orchestrator | 2026-01-03 00:02:41.777100 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-03 00:02:41.777104 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-03 00:02:41.777108 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.777112 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.777116 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.777120 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.777124 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.777127 | orchestrator | + config_drive = true 2026-01-03 00:02:41.777131 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.777135 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.777139 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-03 00:02:41.777142 | orchestrator | + force_delete = false 2026-01-03 00:02:41.777146 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.777150 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.777154 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.777158 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.777162 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.777166 | orchestrator | + name = "testbed-node-3" 2026-01-03 00:02:41.777170 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.777173 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.777177 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.777181 | orchestrator | + stop_before_destroy = false 2026-01-03 00:02:41.777185 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.777189 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-03 00:02:41.777193 | orchestrator | 2026-01-03 00:02:41.777197 | orchestrator | + block_device { 2026-01-03 00:02:41.777204 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.777208 | orchestrator | + delete_on_termination = false 2026-01-03 
00:02:41.777212 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.777220 | orchestrator | + multiattach = false 2026-01-03 00:02:41.777224 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.777228 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.777231 | orchestrator | } 2026-01-03 00:02:41.777235 | orchestrator | 2026-01-03 00:02:41.777239 | orchestrator | + network { 2026-01-03 00:02:41.777243 | orchestrator | + access_network = false 2026-01-03 00:02:41.777247 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.777251 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.777254 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.777258 | orchestrator | + name = (known after apply) 2026-01-03 00:02:41.777262 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.777266 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.777270 | orchestrator | } 2026-01-03 00:02:41.777273 | orchestrator | } 2026-01-03 00:02:41.777470 | orchestrator | 2026-01-03 00:02:41.777482 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-03 00:02:41.777486 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-03 00:02:41.777490 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.777494 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.777498 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.777502 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.777505 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.777509 | orchestrator | + config_drive = true 2026-01-03 00:02:41.777513 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.777516 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.777520 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-03 00:02:41.777524 | 
orchestrator | + force_delete = false 2026-01-03 00:02:41.777528 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.777531 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.777535 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.777539 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.777543 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.777547 | orchestrator | + name = "testbed-node-4" 2026-01-03 00:02:41.777550 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.777554 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.777558 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.777561 | orchestrator | + stop_before_destroy = false 2026-01-03 00:02:41.777565 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.777569 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-03 00:02:41.777573 | orchestrator | 2026-01-03 00:02:41.777576 | orchestrator | + block_device { 2026-01-03 00:02:41.777580 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.777584 | orchestrator | + delete_on_termination = false 2026-01-03 00:02:41.777588 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.777592 | orchestrator | + multiattach = false 2026-01-03 00:02:41.777595 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.777599 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.777603 | orchestrator | } 2026-01-03 00:02:41.777607 | orchestrator | 2026-01-03 00:02:41.777610 | orchestrator | + network { 2026-01-03 00:02:41.777614 | orchestrator | + access_network = false 2026-01-03 00:02:41.777618 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.777622 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.777626 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.777629 | orchestrator | + name = (known 
after apply) 2026-01-03 00:02:41.777633 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.777637 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.777641 | orchestrator | } 2026-01-03 00:02:41.777644 | orchestrator | } 2026-01-03 00:02:41.777848 | orchestrator | 2026-01-03 00:02:41.777860 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-03 00:02:41.777864 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-03 00:02:41.777868 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-03 00:02:41.777887 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-03 00:02:41.777891 | orchestrator | + all_metadata = (known after apply) 2026-01-03 00:02:41.777895 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.777898 | orchestrator | + availability_zone = "nova" 2026-01-03 00:02:41.777902 | orchestrator | + config_drive = true 2026-01-03 00:02:41.777906 | orchestrator | + created = (known after apply) 2026-01-03 00:02:41.777910 | orchestrator | + flavor_id = (known after apply) 2026-01-03 00:02:41.777914 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-03 00:02:41.777918 | orchestrator | + force_delete = false 2026-01-03 00:02:41.777926 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-03 00:02:41.777930 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.777933 | orchestrator | + image_id = (known after apply) 2026-01-03 00:02:41.777937 | orchestrator | + image_name = (known after apply) 2026-01-03 00:02:41.777941 | orchestrator | + key_pair = "testbed" 2026-01-03 00:02:41.777945 | orchestrator | + name = "testbed-node-5" 2026-01-03 00:02:41.777949 | orchestrator | + power_state = "active" 2026-01-03 00:02:41.777952 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.777956 | orchestrator | + security_groups = (known after apply) 2026-01-03 00:02:41.777960 | orchestrator | + 
stop_before_destroy = false 2026-01-03 00:02:41.777964 | orchestrator | + updated = (known after apply) 2026-01-03 00:02:41.777968 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-03 00:02:41.777972 | orchestrator | 2026-01-03 00:02:41.777975 | orchestrator | + block_device { 2026-01-03 00:02:41.777979 | orchestrator | + boot_index = 0 2026-01-03 00:02:41.777983 | orchestrator | + delete_on_termination = false 2026-01-03 00:02:41.777987 | orchestrator | + destination_type = "volume" 2026-01-03 00:02:41.777990 | orchestrator | + multiattach = false 2026-01-03 00:02:41.777994 | orchestrator | + source_type = "volume" 2026-01-03 00:02:41.777998 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.778002 | orchestrator | } 2026-01-03 00:02:41.778006 | orchestrator | 2026-01-03 00:02:41.778009 | orchestrator | + network { 2026-01-03 00:02:41.778279 | orchestrator | + access_network = false 2026-01-03 00:02:41.778285 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-03 00:02:41.778289 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-03 00:02:41.778293 | orchestrator | + mac = (known after apply) 2026-01-03 00:02:41.778297 | orchestrator | + name = (known after apply) 2026-01-03 00:02:41.778301 | orchestrator | + port = (known after apply) 2026-01-03 00:02:41.778305 | orchestrator | + uuid = (known after apply) 2026-01-03 00:02:41.778309 | orchestrator | } 2026-01-03 00:02:41.778313 | orchestrator | } 2026-01-03 00:02:41.778378 | orchestrator | 2026-01-03 00:02:41.778391 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-03 00:02:41.778396 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-03 00:02:41.778400 | orchestrator | + fingerprint = (known after apply) 2026-01-03 00:02:41.778404 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.778408 | orchestrator | + name = "testbed" 2026-01-03 00:02:41.778412 | orchestrator | + private_key = 
(sensitive value) 2026-01-03 00:02:41.778416 | orchestrator | + public_key = (known after apply) 2026-01-03 00:02:41.778420 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.778424 | orchestrator | + user_id = (known after apply) 2026-01-03 00:02:41.778428 | orchestrator | } 2026-01-03 00:02:41.778469 | orchestrator | 2026-01-03 00:02:41.778480 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-03 00:02:41.778485 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-03 00:02:41.778496 | orchestrator | + device = (known after apply) 2026-01-03 00:02:41.778500 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.778504 | orchestrator | + instance_id = (known after apply) 2026-01-03 00:02:41.778508 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.778512 | orchestrator | + volume_id = (known after apply) 2026-01-03 00:02:41.778516 | orchestrator | } 2026-01-03 00:02:41.778553 | orchestrator | 2026-01-03 00:02:41.778563 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-03 00:02:41.778568 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-03 00:02:41.778572 | orchestrator | + device = (known after apply) 2026-01-03 00:02:41.778576 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.778580 | orchestrator | + instance_id = (known after apply) 2026-01-03 00:02:41.778584 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.778588 | orchestrator | + volume_id = (known after apply) 2026-01-03 00:02:41.778592 | orchestrator | } 2026-01-03 00:02:41.778640 | orchestrator | 2026-01-03 00:02:41.778651 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-03 00:02:41.778656 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{ 2026-01-03 00:02:41.778660 | orchestrator | + device = (known after apply) 2026-01-03 00:02:41.778664 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.778668 | orchestrator | + instance_id = (known after apply) 2026-01-03 00:02:41.778672 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.778676 | orchestrator | + volume_id = (known after apply) 2026-01-03 00:02:41.778680 | orchestrator | } 2026-01-03 00:02:41.778723 | orchestrator | 2026-01-03 00:02:41.778734 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-01-03 00:02:41.778739 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-03 00:02:41.778742 | orchestrator | + device = (known after apply) 2026-01-03 00:02:41.778746 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.778750 | orchestrator | + instance_id = (known after apply) 2026-01-03 00:02:41.778754 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.778758 | orchestrator | + volume_id = (known after apply) 2026-01-03 00:02:41.778762 | orchestrator | } 2026-01-03 00:02:41.778798 | orchestrator | 2026-01-03 00:02:41.778809 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-01-03 00:02:41.778813 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-03 00:02:41.778817 | orchestrator | + device = (known after apply) 2026-01-03 00:02:41.778821 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.778825 | orchestrator | + instance_id = (known after apply) 2026-01-03 00:02:41.778832 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.778836 | orchestrator | + volume_id = (known after apply) 2026-01-03 00:02:41.778840 | orchestrator | } 2026-01-03 00:02:41.778913 | orchestrator | 2026-01-03 00:02:41.778926 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] 
will be created 2026-01-03 00:02:41.778931 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-03 00:02:41.778935 | orchestrator | + device = (known after apply) 2026-01-03 00:02:41.778939 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.778942 | orchestrator | + instance_id = (known after apply) 2026-01-03 00:02:41.778946 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.778950 | orchestrator | + volume_id = (known after apply) 2026-01-03 00:02:41.778954 | orchestrator | } 2026-01-03 00:02:41.778997 | orchestrator | 2026-01-03 00:02:41.779009 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2026-01-03 00:02:41.779013 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-03 00:02:41.779017 | orchestrator | + device = (known after apply) 2026-01-03 00:02:41.779021 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.779025 | orchestrator | + instance_id = (known after apply) 2026-01-03 00:02:41.779029 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.779037 | orchestrator | + volume_id = (known after apply) 2026-01-03 00:02:41.779041 | orchestrator | } 2026-01-03 00:02:41.779083 | orchestrator | 2026-01-03 00:02:41.779094 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2026-01-03 00:02:41.779099 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-03 00:02:41.779103 | orchestrator | + device = (known after apply) 2026-01-03 00:02:41.779106 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.779110 | orchestrator | + instance_id = (known after apply) 2026-01-03 00:02:41.779114 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.779118 | orchestrator | + volume_id = (known after apply) 2026-01-03 00:02:41.779122 | orchestrator | } 2026-01-03 
00:02:41.779156 | orchestrator | 2026-01-03 00:02:41.779167 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2026-01-03 00:02:41.779172 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-03 00:02:41.779176 | orchestrator | + device = (known after apply) 2026-01-03 00:02:41.779179 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.779183 | orchestrator | + instance_id = (known after apply) 2026-01-03 00:02:41.779187 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.779191 | orchestrator | + volume_id = (known after apply) 2026-01-03 00:02:41.779195 | orchestrator | } 2026-01-03 00:02:41.779230 | orchestrator | 2026-01-03 00:02:41.779241 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2026-01-03 00:02:41.779247 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2026-01-03 00:02:41.779250 | orchestrator | + fixed_ip = (known after apply) 2026-01-03 00:02:41.779254 | orchestrator | + floating_ip = (known after apply) 2026-01-03 00:02:41.779258 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.779262 | orchestrator | + port_id = (known after apply) 2026-01-03 00:02:41.779266 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.779269 | orchestrator | } 2026-01-03 00:02:41.779343 | orchestrator | 2026-01-03 00:02:41.779355 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created 2026-01-03 00:02:41.779360 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2026-01-03 00:02:41.779364 | orchestrator | + address = (known after apply) 2026-01-03 00:02:41.779367 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.779371 | orchestrator | + dns_domain = (known after apply) 2026-01-03 00:02:41.779375 | orchestrator | 
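
The repeated `openstack_compute_volume_attach_v2.node_volume_attachment[N]` plan entries above are characteristic of a counted resource. A minimal sketch of the kind of configuration that would produce them (the variable `node_count` and the resource names `openstack_compute_instance_v2.node` / `openstack_blockstorage_volume_v3.node_volume` are assumptions; the actual `.tf` files are not part of this log):

```hcl
# Sketch only: one attachment per node, indexed with count.
# "node" and "node_volume" are hypothetical names for the instance
# and volume resources, which do not appear in this plan excerpt.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = var.node_count
  instance_id = openstack_compute_instance_v2.node[count.index].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

Because both `instance_id` and `volume_id` reference resources that are themselves being created, every attribute in the plan shows `(known after apply)`.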
+ dns_name = (known after apply) 2026-01-03 00:02:41.779379 | orchestrator | + fixed_ip = (known after apply) 2026-01-03 00:02:41.779383 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.779386 | orchestrator | + pool = "public" 2026-01-03 00:02:41.779390 | orchestrator | + port_id = (known after apply) 2026-01-03 00:02:41.779394 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.779398 | orchestrator | + subnet_id = (known after apply) 2026-01-03 00:02:41.779402 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.779406 | orchestrator | } 2026-01-03 00:02:41.779496 | orchestrator | 2026-01-03 00:02:41.779509 | orchestrator | # openstack_networking_network_v2.net_management will be created 2026-01-03 00:02:41.779514 | orchestrator | + resource "openstack_networking_network_v2" "net_management" { 2026-01-03 00:02:41.779517 | orchestrator | + admin_state_up = (known after apply) 2026-01-03 00:02:41.779521 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.779525 | orchestrator | + availability_zone_hints = [ 2026-01-03 00:02:41.779529 | orchestrator | + "nova", 2026-01-03 00:02:41.779533 | orchestrator | ] 2026-01-03 00:02:41.779537 | orchestrator | + dns_domain = (known after apply) 2026-01-03 00:02:41.779541 | orchestrator | + external = (known after apply) 2026-01-03 00:02:41.779544 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.779548 | orchestrator | + mtu = (known after apply) 2026-01-03 00:02:41.779552 | orchestrator | + name = "net-testbed-management" 2026-01-03 00:02:41.779556 | orchestrator | + port_security_enabled = (known after apply) 2026-01-03 00:02:41.779564 | orchestrator | + qos_policy_id = (known after apply) 2026-01-03 00:02:41.779568 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.779572 | orchestrator | + shared = (known after apply) 2026-01-03 00:02:41.779576 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.779580 | 
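
The `manager_floating_ip` and `manager_floating_ip_association` entries pair a floating IP allocated from the `public` pool (the only concrete value in the plan) with the manager's management port. A hedged sketch of a matching configuration:

```hcl
# Floating IP from the "public" pool, as shown in the plan above.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

# Associate it with the manager's management port; the port resource
# name is taken from the plan, but its definition is outside this log.
resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```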
orchestrator | + transparent_vlan = (known after apply) 2026-01-03 00:02:41.779584 | orchestrator | 2026-01-03 00:02:41.779588 | orchestrator | + segments (known after apply) 2026-01-03 00:02:41.779592 | orchestrator | } 2026-01-03 00:02:41.779721 | orchestrator | 2026-01-03 00:02:41.779734 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created 2026-01-03 00:02:41.779739 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" { 2026-01-03 00:02:41.779743 | orchestrator | + admin_state_up = (known after apply) 2026-01-03 00:02:41.779747 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-03 00:02:41.779751 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-03 00:02:41.779757 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.779761 | orchestrator | + device_id = (known after apply) 2026-01-03 00:02:41.779764 | orchestrator | + device_owner = (known after apply) 2026-01-03 00:02:41.779768 | orchestrator | + dns_assignment = (known after apply) 2026-01-03 00:02:41.779772 | orchestrator | + dns_name = (known after apply) 2026-01-03 00:02:41.779776 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.779779 | orchestrator | + mac_address = (known after apply) 2026-01-03 00:02:41.779783 | orchestrator | + network_id = (known after apply) 2026-01-03 00:02:41.779787 | orchestrator | + port_security_enabled = (known after apply) 2026-01-03 00:02:41.779791 | orchestrator | + qos_policy_id = (known after apply) 2026-01-03 00:02:41.779794 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.779798 | orchestrator | + security_group_ids = (known after apply) 2026-01-03 00:02:41.779802 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.779806 | orchestrator | 2026-01-03 00:02:41.779810 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.779813 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-03 
00:02:41.779817 | orchestrator | } 2026-01-03 00:02:41.779821 | orchestrator | 2026-01-03 00:02:41.779825 | orchestrator | + binding (known after apply) 2026-01-03 00:02:41.779828 | orchestrator | 2026-01-03 00:02:41.779832 | orchestrator | + fixed_ip { 2026-01-03 00:02:41.779836 | orchestrator | + ip_address = "192.168.16.5" 2026-01-03 00:02:41.779840 | orchestrator | + subnet_id = (known after apply) 2026-01-03 00:02:41.779844 | orchestrator | } 2026-01-03 00:02:41.779848 | orchestrator | } 2026-01-03 00:02:41.780012 | orchestrator | 2026-01-03 00:02:41.780025 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created 2026-01-03 00:02:41.780029 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-03 00:02:41.780033 | orchestrator | + admin_state_up = (known after apply) 2026-01-03 00:02:41.780037 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-03 00:02:41.780041 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-03 00:02:41.780045 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.780049 | orchestrator | + device_id = (known after apply) 2026-01-03 00:02:41.780052 | orchestrator | + device_owner = (known after apply) 2026-01-03 00:02:41.780056 | orchestrator | + dns_assignment = (known after apply) 2026-01-03 00:02:41.780060 | orchestrator | + dns_name = (known after apply) 2026-01-03 00:02:41.780064 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.780067 | orchestrator | + mac_address = (known after apply) 2026-01-03 00:02:41.780072 | orchestrator | + network_id = (known after apply) 2026-01-03 00:02:41.780076 | orchestrator | + port_security_enabled = (known after apply) 2026-01-03 00:02:41.780080 | orchestrator | + qos_policy_id = (known after apply) 2026-01-03 00:02:41.780084 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.780092 | orchestrator | + security_group_ids = (known after apply) 2026-01-03 
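
The `manager_port_management` plan entry shows a port with one `allowed_address_pairs` block (192.168.16.8/32) and a fixed IP of 192.168.16.5. A sketch reconstructed from those values (the `net_management` and `subnet_management` references match resource names appearing elsewhere in this plan; the wiring is an assumption):

```hcl
# Sketch of the manager port, using the addresses shown in the plan.
resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.5"
  }

  # Permit traffic sourced from the VIP address on this port.
  allowed_address_pairs {
    ip_address = "192.168.16.8/32"
  }
}
```

The `node_port_management[N]` entries that follow use the same shape, with three `allowed_address_pairs` (192.168.16.254/32, .8/32, .9/32) and fixed IPs counting up from 192.168.16.10.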
00:02:41.780095 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.780099 | orchestrator | 2026-01-03 00:02:41.780103 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.780107 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-03 00:02:41.780111 | orchestrator | } 2026-01-03 00:02:41.780115 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.780119 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-03 00:02:41.780122 | orchestrator | } 2026-01-03 00:02:41.780126 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.780130 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-03 00:02:41.780134 | orchestrator | } 2026-01-03 00:02:41.780138 | orchestrator | 2026-01-03 00:02:41.780142 | orchestrator | + binding (known after apply) 2026-01-03 00:02:41.780146 | orchestrator | 2026-01-03 00:02:41.780149 | orchestrator | + fixed_ip { 2026-01-03 00:02:41.780153 | orchestrator | + ip_address = "192.168.16.10" 2026-01-03 00:02:41.780157 | orchestrator | + subnet_id = (known after apply) 2026-01-03 00:02:41.780161 | orchestrator | } 2026-01-03 00:02:41.780165 | orchestrator | } 2026-01-03 00:02:41.780311 | orchestrator | 2026-01-03 00:02:41.780324 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created 2026-01-03 00:02:41.780329 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-03 00:02:41.780333 | orchestrator | + admin_state_up = (known after apply) 2026-01-03 00:02:41.780337 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-03 00:02:41.780340 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-03 00:02:41.780345 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.780349 | orchestrator | + device_id = (known after apply) 2026-01-03 00:02:41.780352 | orchestrator | + device_owner = (known after apply) 2026-01-03 00:02:41.780356 | orchestrator | + dns_assignment = (known after 
apply) 2026-01-03 00:02:41.780360 | orchestrator | + dns_name = (known after apply) 2026-01-03 00:02:41.780364 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.780368 | orchestrator | + mac_address = (known after apply) 2026-01-03 00:02:41.780372 | orchestrator | + network_id = (known after apply) 2026-01-03 00:02:41.780376 | orchestrator | + port_security_enabled = (known after apply) 2026-01-03 00:02:41.780379 | orchestrator | + qos_policy_id = (known after apply) 2026-01-03 00:02:41.780383 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.780387 | orchestrator | + security_group_ids = (known after apply) 2026-01-03 00:02:41.780391 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.780394 | orchestrator | 2026-01-03 00:02:41.780399 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.780402 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-03 00:02:41.780406 | orchestrator | } 2026-01-03 00:02:41.780410 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.780414 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-03 00:02:41.780418 | orchestrator | } 2026-01-03 00:02:41.780421 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.780425 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-03 00:02:41.780429 | orchestrator | } 2026-01-03 00:02:41.780433 | orchestrator | 2026-01-03 00:02:41.780437 | orchestrator | + binding (known after apply) 2026-01-03 00:02:41.780440 | orchestrator | 2026-01-03 00:02:41.780444 | orchestrator | + fixed_ip { 2026-01-03 00:02:41.780448 | orchestrator | + ip_address = "192.168.16.11" 2026-01-03 00:02:41.780452 | orchestrator | + subnet_id = (known after apply) 2026-01-03 00:02:41.780456 | orchestrator | } 2026-01-03 00:02:41.780459 | orchestrator | } 2026-01-03 00:02:41.785661 | orchestrator | 2026-01-03 00:02:41.785706 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created 2026-01-03 
00:02:41.785712 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-03 00:02:41.785716 | orchestrator | + admin_state_up = (known after apply) 2026-01-03 00:02:41.785722 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-03 00:02:41.785726 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-03 00:02:41.785730 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.785743 | orchestrator | + device_id = (known after apply) 2026-01-03 00:02:41.785747 | orchestrator | + device_owner = (known after apply) 2026-01-03 00:02:41.785751 | orchestrator | + dns_assignment = (known after apply) 2026-01-03 00:02:41.785755 | orchestrator | + dns_name = (known after apply) 2026-01-03 00:02:41.785765 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.785769 | orchestrator | + mac_address = (known after apply) 2026-01-03 00:02:41.785773 | orchestrator | + network_id = (known after apply) 2026-01-03 00:02:41.785776 | orchestrator | + port_security_enabled = (known after apply) 2026-01-03 00:02:41.785780 | orchestrator | + qos_policy_id = (known after apply) 2026-01-03 00:02:41.785784 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.785788 | orchestrator | + security_group_ids = (known after apply) 2026-01-03 00:02:41.785792 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.785795 | orchestrator | 2026-01-03 00:02:41.785799 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.785804 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-03 00:02:41.785808 | orchestrator | } 2026-01-03 00:02:41.785812 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.785815 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-03 00:02:41.785819 | orchestrator | } 2026-01-03 00:02:41.785823 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.785827 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-03 00:02:41.785830 
| orchestrator | } 2026-01-03 00:02:41.785834 | orchestrator | 2026-01-03 00:02:41.785838 | orchestrator | + binding (known after apply) 2026-01-03 00:02:41.785842 | orchestrator | 2026-01-03 00:02:41.785846 | orchestrator | + fixed_ip { 2026-01-03 00:02:41.785850 | orchestrator | + ip_address = "192.168.16.12" 2026-01-03 00:02:41.785854 | orchestrator | + subnet_id = (known after apply) 2026-01-03 00:02:41.785858 | orchestrator | } 2026-01-03 00:02:41.785861 | orchestrator | } 2026-01-03 00:02:41.785888 | orchestrator | 2026-01-03 00:02:41.785895 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created 2026-01-03 00:02:41.785901 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-03 00:02:41.785907 | orchestrator | + admin_state_up = (known after apply) 2026-01-03 00:02:41.785913 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-03 00:02:41.785919 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-03 00:02:41.785925 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.785931 | orchestrator | + device_id = (known after apply) 2026-01-03 00:02:41.785938 | orchestrator | + device_owner = (known after apply) 2026-01-03 00:02:41.785943 | orchestrator | + dns_assignment = (known after apply) 2026-01-03 00:02:41.785946 | orchestrator | + dns_name = (known after apply) 2026-01-03 00:02:41.785950 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.785954 | orchestrator | + mac_address = (known after apply) 2026-01-03 00:02:41.785957 | orchestrator | + network_id = (known after apply) 2026-01-03 00:02:41.785961 | orchestrator | + port_security_enabled = (known after apply) 2026-01-03 00:02:41.785965 | orchestrator | + qos_policy_id = (known after apply) 2026-01-03 00:02:41.785969 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.785973 | orchestrator | + security_group_ids = (known after apply) 2026-01-03 00:02:41.785976 | 
orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.785980 | orchestrator | 2026-01-03 00:02:41.785984 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.785988 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-03 00:02:41.785992 | orchestrator | } 2026-01-03 00:02:41.785996 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.785999 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-03 00:02:41.786003 | orchestrator | } 2026-01-03 00:02:41.786007 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.786011 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-03 00:02:41.786052 | orchestrator | } 2026-01-03 00:02:41.786056 | orchestrator | 2026-01-03 00:02:41.786065 | orchestrator | + binding (known after apply) 2026-01-03 00:02:41.786069 | orchestrator | 2026-01-03 00:02:41.786073 | orchestrator | + fixed_ip { 2026-01-03 00:02:41.786077 | orchestrator | + ip_address = "192.168.16.13" 2026-01-03 00:02:41.786081 | orchestrator | + subnet_id = (known after apply) 2026-01-03 00:02:41.786085 | orchestrator | } 2026-01-03 00:02:41.786089 | orchestrator | } 2026-01-03 00:02:41.786096 | orchestrator | 2026-01-03 00:02:41.786100 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created 2026-01-03 00:02:41.786103 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-03 00:02:41.786107 | orchestrator | + admin_state_up = (known after apply) 2026-01-03 00:02:41.786111 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-03 00:02:41.786115 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-03 00:02:41.786119 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.786122 | orchestrator | + device_id = (known after apply) 2026-01-03 00:02:41.786126 | orchestrator | + device_owner = (known after apply) 2026-01-03 00:02:41.786130 | orchestrator | + dns_assignment = (known after apply) 2026-01-03 
00:02:41.786134 | orchestrator | + dns_name = (known after apply) 2026-01-03 00:02:41.786137 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.786141 | orchestrator | + mac_address = (known after apply) 2026-01-03 00:02:41.786145 | orchestrator | + network_id = (known after apply) 2026-01-03 00:02:41.786149 | orchestrator | + port_security_enabled = (known after apply) 2026-01-03 00:02:41.786153 | orchestrator | + qos_policy_id = (known after apply) 2026-01-03 00:02:41.786156 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.786160 | orchestrator | + security_group_ids = (known after apply) 2026-01-03 00:02:41.786164 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.786170 | orchestrator | 2026-01-03 00:02:41.786174 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.786178 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-03 00:02:41.786182 | orchestrator | } 2026-01-03 00:02:41.786185 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.786189 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-03 00:02:41.786193 | orchestrator | } 2026-01-03 00:02:41.786197 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.786201 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-03 00:02:41.786204 | orchestrator | } 2026-01-03 00:02:41.786208 | orchestrator | 2026-01-03 00:02:41.786212 | orchestrator | + binding (known after apply) 2026-01-03 00:02:41.786216 | orchestrator | 2026-01-03 00:02:41.786219 | orchestrator | + fixed_ip { 2026-01-03 00:02:41.786223 | orchestrator | + ip_address = "192.168.16.14" 2026-01-03 00:02:41.786227 | orchestrator | + subnet_id = (known after apply) 2026-01-03 00:02:41.786231 | orchestrator | } 2026-01-03 00:02:41.786235 | orchestrator | } 2026-01-03 00:02:41.786241 | orchestrator | 2026-01-03 00:02:41.786245 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created 2026-01-03 00:02:41.786249 | orchestrator | 
+ resource "openstack_networking_port_v2" "node_port_management" { 2026-01-03 00:02:41.786253 | orchestrator | + admin_state_up = (known after apply) 2026-01-03 00:02:41.786257 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-03 00:02:41.786260 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-03 00:02:41.786264 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.786268 | orchestrator | + device_id = (known after apply) 2026-01-03 00:02:41.786272 | orchestrator | + device_owner = (known after apply) 2026-01-03 00:02:41.786276 | orchestrator | + dns_assignment = (known after apply) 2026-01-03 00:02:41.786279 | orchestrator | + dns_name = (known after apply) 2026-01-03 00:02:41.786283 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.786287 | orchestrator | + mac_address = (known after apply) 2026-01-03 00:02:41.786291 | orchestrator | + network_id = (known after apply) 2026-01-03 00:02:41.786294 | orchestrator | + port_security_enabled = (known after apply) 2026-01-03 00:02:41.786298 | orchestrator | + qos_policy_id = (known after apply) 2026-01-03 00:02:41.786306 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.786310 | orchestrator | + security_group_ids = (known after apply) 2026-01-03 00:02:41.786314 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.786318 | orchestrator | 2026-01-03 00:02:41.786322 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.786326 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-03 00:02:41.786329 | orchestrator | } 2026-01-03 00:02:41.786333 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.786337 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-03 00:02:41.786341 | orchestrator | } 2026-01-03 00:02:41.786344 | orchestrator | + allowed_address_pairs { 2026-01-03 00:02:41.786348 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-03 00:02:41.786352 | orchestrator | } 2026-01-03 
00:02:41.786356 | orchestrator | 2026-01-03 00:02:41.786363 | orchestrator | + binding (known after apply) 2026-01-03 00:02:41.786367 | orchestrator | 2026-01-03 00:02:41.786371 | orchestrator | + fixed_ip { 2026-01-03 00:02:41.786375 | orchestrator | + ip_address = "192.168.16.15" 2026-01-03 00:02:41.786378 | orchestrator | + subnet_id = (known after apply) 2026-01-03 00:02:41.786382 | orchestrator | } 2026-01-03 00:02:41.786386 | orchestrator | } 2026-01-03 00:02:41.786390 | orchestrator | 2026-01-03 00:02:41.786394 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created 2026-01-03 00:02:41.786398 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" { 2026-01-03 00:02:41.786401 | orchestrator | + force_destroy = false 2026-01-03 00:02:41.786405 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.786409 | orchestrator | + port_id = (known after apply) 2026-01-03 00:02:41.786413 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.786417 | orchestrator | + router_id = (known after apply) 2026-01-03 00:02:41.786420 | orchestrator | + subnet_id = (known after apply) 2026-01-03 00:02:41.786424 | orchestrator | } 2026-01-03 00:02:41.786428 | orchestrator | 2026-01-03 00:02:41.786432 | orchestrator | # openstack_networking_router_v2.router will be created 2026-01-03 00:02:41.786436 | orchestrator | + resource "openstack_networking_router_v2" "router" { 2026-01-03 00:02:41.786440 | orchestrator | + admin_state_up = (known after apply) 2026-01-03 00:02:41.786443 | orchestrator | + all_tags = (known after apply) 2026-01-03 00:02:41.786447 | orchestrator | + availability_zone_hints = [ 2026-01-03 00:02:41.786451 | orchestrator | + "nova", 2026-01-03 00:02:41.786455 | orchestrator | ] 2026-01-03 00:02:41.786458 | orchestrator | + distributed = (known after apply) 2026-01-03 00:02:41.786462 | orchestrator | + enable_snat = (known after apply) 2026-01-03 00:02:41.786466 | 
orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2026-01-03 00:02:41.786470 | orchestrator | + external_qos_policy_id = (known after apply) 2026-01-03 00:02:41.786474 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.786477 | orchestrator | + name = "testbed" 2026-01-03 00:02:41.786481 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.786485 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.786489 | orchestrator | 2026-01-03 00:02:41.786493 | orchestrator | + external_fixed_ip (known after apply) 2026-01-03 00:02:41.786496 | orchestrator | } 2026-01-03 00:02:41.786503 | orchestrator | 2026-01-03 00:02:41.786507 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2026-01-03 00:02:41.786512 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2026-01-03 00:02:41.786516 | orchestrator | + description = "ssh" 2026-01-03 00:02:41.786520 | orchestrator | + direction = "ingress" 2026-01-03 00:02:41.786523 | orchestrator | + ethertype = "IPv4" 2026-01-03 00:02:41.786527 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.786531 | orchestrator | + port_range_max = 22 2026-01-03 00:02:41.786535 | orchestrator | + port_range_min = 22 2026-01-03 00:02:41.786539 | orchestrator | + protocol = "tcp" 2026-01-03 00:02:41.786542 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.786549 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-03 00:02:41.786553 | orchestrator | + remote_group_id = (known after apply) 2026-01-03 00:02:41.786559 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-03 00:02:41.786565 | orchestrator | + security_group_id = (known after apply) 2026-01-03 00:02:41.786571 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.786577 | orchestrator | } 2026-01-03 00:02:41.786583 | orchestrator | 2026-01-03 
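
The router and its interface carry three concrete values in the plan: the name `testbed`, the external network ID, and the `nova` availability-zone hint. A sketch consistent with those entries (the `subnet_management` reference is an assumption based on the subnet resource named later in this plan):

```hcl
resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

# Attach the management subnet to the router.
resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}
```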
00:02:41.786589 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2026-01-03 00:02:41.786595 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2026-01-03 00:02:41.786599 | orchestrator | + description = "wireguard" 2026-01-03 00:02:41.786602 | orchestrator | + direction = "ingress" 2026-01-03 00:02:41.786606 | orchestrator | + ethertype = "IPv4" 2026-01-03 00:02:41.786610 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.786614 | orchestrator | + port_range_max = 51820 2026-01-03 00:02:41.786618 | orchestrator | + port_range_min = 51820 2026-01-03 00:02:41.786622 | orchestrator | + protocol = "udp" 2026-01-03 00:02:41.786625 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.786629 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-03 00:02:41.786633 | orchestrator | + remote_group_id = (known after apply) 2026-01-03 00:02:41.786637 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-03 00:02:41.786640 | orchestrator | + security_group_id = (known after apply) 2026-01-03 00:02:41.786644 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.786648 | orchestrator | } 2026-01-03 00:02:41.786652 | orchestrator | 2026-01-03 00:02:41.786656 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2026-01-03 00:02:41.786660 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2026-01-03 00:02:41.786663 | orchestrator | + direction = "ingress" 2026-01-03 00:02:41.786667 | orchestrator | + ethertype = "IPv4" 2026-01-03 00:02:41.786671 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.786675 | orchestrator | + protocol = "tcp" 2026-01-03 00:02:41.786679 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.786682 | orchestrator | + remote_address_group_id = (known 
after apply) 2026-01-03 00:02:41.786686 | orchestrator | + remote_group_id = (known after apply) 2026-01-03 00:02:41.786700 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-01-03 00:02:41.786704 | orchestrator | + security_group_id = (known after apply) 2026-01-03 00:02:41.786708 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.786717 | orchestrator | } 2026-01-03 00:02:41.786721 | orchestrator | 2026-01-03 00:02:41.786725 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2026-01-03 00:02:41.786729 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2026-01-03 00:02:41.786733 | orchestrator | + direction = "ingress" 2026-01-03 00:02:41.786736 | orchestrator | + ethertype = "IPv4" 2026-01-03 00:02:41.786740 | orchestrator | + id = (known after apply) 2026-01-03 00:02:41.786744 | orchestrator | + protocol = "udp" 2026-01-03 00:02:41.786748 | orchestrator | + region = (known after apply) 2026-01-03 00:02:41.786752 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-03 00:02:41.786755 | orchestrator | + remote_group_id = (known after apply) 2026-01-03 00:02:41.786759 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-01-03 00:02:41.786763 | orchestrator | + security_group_id = (known after apply) 2026-01-03 00:02:41.786767 | orchestrator | + tenant_id = (known after apply) 2026-01-03 00:02:41.786771 | orchestrator | } 2026-01-03 00:02:41.786777 | orchestrator | 2026-01-03 00:02:41.786781 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2026-01-03 00:02:41.786789 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2026-01-03 00:02:41.786793 | orchestrator | + direction = "ingress" 2026-01-03 00:02:41.786796 | orchestrator | + ethertype = "IPv4" 2026-01-03 00:02:41.786800 | orchestrator | + id = 
(known after apply)
2026-01-03 00:02:41.786804 | orchestrator |       + protocol                = "icmp"
2026-01-03 00:02:41.786808 | orchestrator |       + region                  = (known after apply)
2026-01-03 00:02:41.786812 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-03 00:02:41.786815 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-03 00:02:41.786819 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-01-03 00:02:41.786823 | orchestrator |       + security_group_id       = (known after apply)
2026-01-03 00:02:41.786827 | orchestrator |       + tenant_id               = (known after apply)
2026-01-03 00:02:41.786831 | orchestrator |     }
2026-01-03 00:02:41.786835 | orchestrator |
2026-01-03 00:02:41.786838 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-01-03 00:02:41.786842 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-01-03 00:02:41.786846 | orchestrator |       + direction               = "ingress"
2026-01-03 00:02:41.786850 | orchestrator |       + ethertype               = "IPv4"
2026-01-03 00:02:41.786854 | orchestrator |       + id                      = (known after apply)
2026-01-03 00:02:41.786858 | orchestrator |       + protocol                = "tcp"
2026-01-03 00:02:41.786861 | orchestrator |       + region                  = (known after apply)
2026-01-03 00:02:41.786865 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-03 00:02:41.786888 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-03 00:02:41.786892 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-01-03 00:02:41.786896 | orchestrator |       + security_group_id       = (known after apply)
2026-01-03 00:02:41.786900 | orchestrator |       + tenant_id               = (known after apply)
2026-01-03 00:02:41.786904 | orchestrator |     }
2026-01-03 00:02:41.786907 | orchestrator |
2026-01-03 00:02:41.786911 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-01-03 00:02:41.786915 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-01-03 00:02:41.786919 | orchestrator |       + direction               = "ingress"
2026-01-03 00:02:41.786923 | orchestrator |       + ethertype               = "IPv4"
2026-01-03 00:02:41.786927 | orchestrator |       + id                      = (known after apply)
2026-01-03 00:02:41.786930 | orchestrator |       + protocol                = "udp"
2026-01-03 00:02:41.786934 | orchestrator |       + region                  = (known after apply)
2026-01-03 00:02:41.786938 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-03 00:02:41.786942 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-03 00:02:41.786946 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-01-03 00:02:41.786949 | orchestrator |       + security_group_id       = (known after apply)
2026-01-03 00:02:41.786953 | orchestrator |       + tenant_id               = (known after apply)
2026-01-03 00:02:41.786957 | orchestrator |     }
2026-01-03 00:02:41.786961 | orchestrator |
2026-01-03 00:02:41.786964 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-01-03 00:02:41.786968 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-01-03 00:02:41.786972 | orchestrator |       + direction               = "ingress"
2026-01-03 00:02:41.786979 | orchestrator |       + ethertype               = "IPv4"
2026-01-03 00:02:41.786983 | orchestrator |       + id                      = (known after apply)
2026-01-03 00:02:41.786986 | orchestrator |       + protocol                = "icmp"
2026-01-03 00:02:41.786990 | orchestrator |       + region                  = (known after apply)
2026-01-03 00:02:41.786994 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-03 00:02:41.786998 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-03 00:02:41.787002 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-01-03 00:02:41.787006 | orchestrator |       + security_group_id       = (known after apply)
2026-01-03 00:02:41.787009 | orchestrator |       + tenant_id               = (known after apply)
2026-01-03 00:02:41.787017 | orchestrator |     }
2026-01-03 00:02:41.787021 | orchestrator |
2026-01-03 00:02:41.787024 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-01-03 00:02:41.787028 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-01-03 00:02:41.787032 | orchestrator |       + description             = "vrrp"
2026-01-03 00:02:41.787036 | orchestrator |       + direction               = "ingress"
2026-01-03 00:02:41.787040 | orchestrator |       + ethertype               = "IPv4"
2026-01-03 00:02:41.787043 | orchestrator |       + id                      = (known after apply)
2026-01-03 00:02:41.787047 | orchestrator |       + protocol                = "112"
2026-01-03 00:02:41.787051 | orchestrator |       + region                  = (known after apply)
2026-01-03 00:02:41.787055 | orchestrator |       + remote_address_group_id = (known after apply)
2026-01-03 00:02:41.787059 | orchestrator |       + remote_group_id         = (known after apply)
2026-01-03 00:02:41.787063 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-01-03 00:02:41.787066 | orchestrator |       + security_group_id       = (known after apply)
2026-01-03 00:02:41.787070 | orchestrator |       + tenant_id               = (known after apply)
2026-01-03 00:02:41.787074 | orchestrator |     }
2026-01-03 00:02:41.787078 | orchestrator |
2026-01-03 00:02:41.787082 | orchestrator |   # openstack_networking_secgroup_v2.security_group_management will be created
2026-01-03 00:02:41.787086 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-01-03 00:02:41.787089 | orchestrator |       + all_tags    = (known after apply)
2026-01-03 00:02:41.787093 | orchestrator |       + description = "management security group"
2026-01-03 00:02:41.787097 | orchestrator |       + id          = (known after apply)
2026-01-03 00:02:41.787101 | orchestrator |       + name        = "testbed-management"
2026-01-03 00:02:41.787105 | orchestrator |       + region      = (known after apply)
2026-01-03 00:02:41.787108 | orchestrator |       + stateful    = (known after apply)
2026-01-03 00:02:41.787112 | orchestrator |       + tenant_id   = (known after apply)
2026-01-03 00:02:41.787116 | orchestrator |     }
2026-01-03 00:02:41.787122 | orchestrator |
2026-01-03 00:02:41.787126 | orchestrator |   # openstack_networking_secgroup_v2.security_group_node will be created
2026-01-03 00:02:41.787130 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-01-03 00:02:41.787134 | orchestrator |       + all_tags    = (known after apply)
2026-01-03 00:02:41.787137 | orchestrator |       + description = "node security group"
2026-01-03 00:02:41.787141 | orchestrator |       + id          = (known after apply)
2026-01-03 00:02:41.787145 | orchestrator |       + name        = "testbed-node"
2026-01-03 00:02:41.787149 | orchestrator |       + region      = (known after apply)
2026-01-03 00:02:41.787152 | orchestrator |       + stateful    = (known after apply)
2026-01-03 00:02:41.787156 | orchestrator |       + tenant_id   = (known after apply)
2026-01-03 00:02:41.787160 | orchestrator |     }
2026-01-03 00:02:41.787164 | orchestrator |
2026-01-03 00:02:41.787168 | orchestrator |   # openstack_networking_subnet_v2.subnet_management will be created
2026-01-03 00:02:41.787171 | orchestrator |   + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-01-03 00:02:41.787175 | orchestrator |       + all_tags          = (known after apply)
2026-01-03 00:02:41.787179 | orchestrator |       + cidr              = "192.168.16.0/20"
2026-01-03 00:02:41.787183 | orchestrator |       + dns_nameservers   = [
2026-01-03 00:02:41.787187 | orchestrator |           + "8.8.8.8",
2026-01-03 00:02:41.787191 | orchestrator |           + "9.9.9.9",
2026-01-03 00:02:41.787194 | orchestrator |         ]
2026-01-03 00:02:41.787198 | orchestrator |       + enable_dhcp       = true
2026-01-03 00:02:41.787202 | orchestrator |       + gateway_ip        = (known after apply)
2026-01-03 00:02:41.787206 | orchestrator |       + id                = (known after apply)
2026-01-03 00:02:41.787210 | orchestrator |       + ip_version        = 4
2026-01-03 00:02:41.787214 | orchestrator |       + ipv6_address_mode = (known after apply)
2026-01-03 00:02:41.787217 | orchestrator |       + ipv6_ra_mode      = (known after apply)
2026-01-03 00:02:41.787221 | orchestrator |       + name              = "subnet-testbed-management"
2026-01-03 00:02:41.787225 | orchestrator |       + network_id        = (known after apply)
2026-01-03 00:02:41.787229 | orchestrator |       + no_gateway        = false
2026-01-03 00:02:41.787233 | orchestrator |       + region            = (known after apply)
2026-01-03 00:02:41.787236 | orchestrator |       + service_types     = (known after apply)
2026-01-03 00:02:41.787243 | orchestrator |       + tenant_id         = (known after apply)
2026-01-03 00:02:41.787247 | orchestrator |
2026-01-03 00:02:41.787251 | orchestrator |       + allocation_pool {
2026-01-03 00:02:41.787254 | orchestrator |           + end   = "192.168.31.250"
2026-01-03 00:02:41.787258 | orchestrator |           + start = "192.168.31.200"
2026-01-03 00:02:41.787262 | orchestrator |         }
2026-01-03 00:02:41.787266 | orchestrator |     }
2026-01-03 00:02:41.787269 | orchestrator |
2026-01-03 00:02:41.787273 | orchestrator |   # terraform_data.image will be created
2026-01-03 00:02:41.787277 | orchestrator |   + resource "terraform_data" "image" {
2026-01-03 00:02:41.787281 | orchestrator |       + id     = (known after apply)
2026-01-03 00:02:41.787285 | orchestrator |       + input  = "Ubuntu 24.04"
2026-01-03 00:02:41.787288 | orchestrator |       + output = (known after apply)
2026-01-03 00:02:41.787292 | orchestrator |     }
2026-01-03 00:02:41.787296 | orchestrator |
2026-01-03 00:02:41.787300 | orchestrator |   # terraform_data.image_node will be created
2026-01-03 00:02:41.787304 | orchestrator |   + resource "terraform_data" "image_node" {
2026-01-03 00:02:41.787307 | orchestrator |       + id     = (known after apply)
2026-01-03 00:02:41.787311 | orchestrator |       + input  = "Ubuntu 24.04"
2026-01-03 00:02:41.787315 | orchestrator |       + output = (known after apply)
2026-01-03 00:02:41.787319 | orchestrator |     }
2026-01-03 00:02:41.787322 | orchestrator |
2026-01-03 00:02:41.787326 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
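When post-processing job logs like this one, the `Plan: X to add, Y to change, Z to destroy.` summary line above can be extracted mechanically. A minimal sketch, assuming a captured log file (a here-doc stands in for the real console log so the snippet is runnable anywhere):

```shell
#!/bin/sh
# Extract the terraform plan summary from a captured console log.
# The here-doc below is a stand-in for the real job log file.
log=$(mktemp)
cat > "$log" <<'EOF'
2026-01-03 00:02:41.787326 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
EOF
# Pull the "to add" count out of the summary line.
to_add=$(sed -n 's/.*Plan: \([0-9]*\) to add.*/\1/p' "$log")
echo "resources to add: $to_add"
rm -f "$log"
```

The same `sed` pattern works on the full log, since terraform emits exactly one such summary per plan.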
2026-01-03 00:02:41.787330 | orchestrator |
2026-01-03 00:02:41.787334 | orchestrator | Changes to Outputs:
2026-01-03 00:02:41.787337 | orchestrator |   + manager_address = (sensitive value)
2026-01-03 00:02:41.787341 | orchestrator |   + private_key     = (sensitive value)
2026-01-03 00:02:41.936796 | orchestrator | terraform_data.image: Creating...
2026-01-03 00:02:41.937098 | orchestrator | terraform_data.image: Creation complete after 0s [id=5dca5559-fbfb-c22a-e32a-dbbb1bc69bc7]
2026-01-03 00:02:41.999569 | orchestrator | terraform_data.image_node: Creating...
2026-01-03 00:02:42.000709 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=c13556c4-689d-1dd4-314c-c921077d0b1c]
2026-01-03 00:02:42.019268 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-01-03 00:02:42.024621 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-01-03 00:02:42.035902 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-01-03 00:02:42.038079 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-01-03 00:02:42.046468 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-01-03 00:02:42.047651 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-01-03 00:02:42.050956 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-01-03 00:02:42.053589 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-01-03 00:02:42.059005 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-01-03 00:02:42.061130 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-01-03 00:02:42.558142 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-03 00:02:42.563578 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-01-03 00:02:42.569272 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-03 00:02:42.575770 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-01-03 00:02:42.710493 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-01-03 00:02:42.719268 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-01-03 00:02:43.330711 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 0s [id=2704870c-ec10-40e3-af19-c76e46fc5e5e]
2026-01-03 00:02:44.808859 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-01-03 00:02:45.729087 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=5af48d4e-a5d9-4c77-9873-39f930691ccf]
2026-01-03 00:02:45.737701 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-01-03 00:02:45.739221 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=38f41ea3-c1f8-47b6-a316-62c713a7ab6d]
2026-01-03 00:02:45.741335 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=0b3b8b78-42ce-4774-a4e8-f10424aa2bf1]
2026-01-03 00:02:45.751272 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-01-03 00:02:45.751884 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-01-03 00:02:45.759047 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=55b36885-901f-4155-8165-44f8903e4943]
2026-01-03 00:02:45.764986 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=d4e0be62-f642-4458-ac3b-093009378a3c]
2026-01-03 00:02:45.765983 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-01-03 00:02:45.770479 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-01-03 00:02:45.780793 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=87925512-ad51-4d26-92cc-5f354ec37d18]
2026-01-03 00:02:45.788781 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-01-03 00:02:45.789792 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=65a14f46-a6f7-4da8-aafc-46a47f969b79]
2026-01-03 00:02:45.803387 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-01-03 00:02:45.806389 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=955110c69255e5f42e27ecaa4d5bb012d64da5f6]
2026-01-03 00:02:45.813805 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=5710b3c7-cfac-4b3c-9149-eeb74f32a79f]
2026-01-03 00:02:45.815515 | orchestrator | local_file.id_rsa_pub: Creating...
2026-01-03 00:02:45.822406 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-01-03 00:02:45.826267 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=655b92807987e9a347e35bf6f3e784eea307b439]
2026-01-03 00:02:45.847301 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=e5f15048-4e23-4094-a7eb-216bc02a3879]
2026-01-03 00:02:46.791674 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=5c877c24-9ed7-4cc9-a1d6-11e6343c504c]
2026-01-03 00:02:46.833364 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=60572c29-05e3-4c7d-b5fd-34b7a8046e7d]
2026-01-03 00:02:46.844089 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-01-03 00:02:49.164713 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=a696006f-841d-4b4f-91fe-873e20a4fba1]
2026-01-03 00:02:49.170940 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=1c9b7723-665d-4a24-8fea-45de865b62a8]
2026-01-03 00:02:49.184289 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=84db6a0e-1808-47c1-b10d-bd69b23c363f]
2026-01-03 00:02:49.201855 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca]
2026-01-03 00:02:49.212851 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=dc3a99a0-d1c9-4d4c-8382-3504c476ee36]
2026-01-03 00:02:49.242323 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=59895bd7-538c-4e45-9afb-83c70c5e027c]
2026-01-03 00:02:50.539755 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=d1c6847d-8a29-453f-bc84-fe039010760b]
2026-01-03 00:02:50.546429 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-01-03 00:02:50.546827 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-01-03 00:02:50.547283 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-01-03 00:02:50.782139 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=24c38c4c-f516-497f-87dc-7b536e49f4c6]
2026-01-03 00:02:50.788111 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-01-03 00:02:50.788183 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-01-03 00:02:50.788192 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-01-03 00:02:50.788220 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-01-03 00:02:50.790520 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-01-03 00:02:50.790814 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-01-03 00:02:50.816718 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=8197d88c-2a07-4ab1-a084-9791d9d89514]
2026-01-03 00:02:50.828794 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-01-03 00:02:50.829195 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-01-03 00:02:50.831125 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-01-03 00:02:51.164698 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=bddbd983-9f03-4ad5-867c-c597b783eb65]
2026-01-03 00:02:51.171383 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-01-03 00:02:51.233967 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=99207649-a2e4-4c72-bcb8-b409ea493f75]
2026-01-03 00:02:51.247239 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-01-03 00:02:51.344442 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=a4d29871-23f7-45ac-a4cf-4b6e0defd7df]
2026-01-03 00:02:51.353070 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-01-03 00:02:51.413885 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=ff22d96b-6dc0-44d7-a03c-4b805267ed11]
2026-01-03 00:02:51.426442 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-01-03 00:02:51.565720 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=c2a9371d-9393-4bff-8b15-9d0ab1454038]
2026-01-03 00:02:51.575901 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-01-03 00:02:51.689112 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=b6015655-98fe-461d-a1a2-d3979f8339d1]
2026-01-03 00:02:51.700231 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-01-03 00:02:51.762622 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=d9c50482-114a-41db-9907-43959156ea88]
2026-01-03 00:02:51.777253 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-01-03 00:02:51.865006 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=eb3f2189-92eb-4375-a01d-b30b4b327bf1]
2026-01-03 00:02:51.914701 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=c4a46c62-0b10-422b-9258-560a8fbbe574]
2026-01-03 00:02:51.997917 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=260a977d-5a4e-41c7-ab98-c194b29d5ada]
2026-01-03 00:02:52.077449 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=dff11b58-fb44-4b93-bf0b-24cfded620b5]
2026-01-03 00:02:52.079268 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=0db73160-3ebe-4b11-8ce3-77f9b9e31027]
2026-01-03 00:02:52.623247 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=bb251481-0f92-4a14-9d72-53c3879a4dd9]
2026-01-03 00:02:52.628675 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=d3c9b0b1-331a-483c-8691-e473bbd9dea5]
2026-01-03 00:02:52.641074 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=3956e3f5-5c59-4347-af76-e0750641b8fa]
2026-01-03 00:02:52.851515 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=d9d978d1-841e-4bae-b717-417b6174655d]
2026-01-03 00:02:55.630448 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=56933e5d-47ec-461e-9fc1-b0a55fd0e610]
2026-01-03 00:02:55.653918 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-01-03 00:02:55.663202 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-03 00:02:55.663282 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-03 00:02:55.665518 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-01-03 00:02:55.669235 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-01-03 00:02:55.681272 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-03 00:02:55.689669 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-03 00:02:57.262686 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=bc2fd73b-1f59-4c98-95fc-2b7705fea570]
2026-01-03 00:02:57.276597 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-03 00:02:57.278547 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-03 00:02:57.279432 | orchestrator | local_file.inventory: Creating...
2026-01-03 00:02:57.282461 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=57db01c4c43b7ac6548e11ab5dc029e491594aed]
2026-01-03 00:02:57.283604 | orchestrator | local_file.inventory: Creation complete after 0s [id=cccce069542cc08c3d134ac5278e8b10c9f2b75f]
2026-01-03 00:02:58.217028 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=bc2fd73b-1f59-4c98-95fc-2b7705fea570]
2026-01-03 00:03:05.664551 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-03 00:03:05.664692 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-03 00:03:05.670797 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-03 00:03:05.674238 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-03 00:03:05.685417 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-01-03 00:03:05.690965 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-03 00:03:15.673394 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-03 00:03:15.673529 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-03 00:03:15.673559 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-03 00:03:15.674453 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-03 00:03:15.685815 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-03 00:03:15.691014 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-03 00:03:25.682385 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-01-03 00:03:25.682522 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-01-03 00:03:25.682536 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-01-03 00:03:25.682544 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-01-03 00:03:25.686954 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-01-03 00:03:25.691096 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-01-03 00:03:35.691156 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-01-03 00:03:35.691249 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-01-03 00:03:35.691257 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-01-03 00:03:35.691264 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-01-03 00:03:35.691292 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-01-03 00:03:35.691385 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-01-03 00:03:36.367594 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 40s [id=6f2980f5-a681-43ac-9cde-5c3c721149e1]
2026-01-03 00:03:36.439351 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 40s [id=dbdf40df-c5b3-432e-a0b4-85e704102730]
2026-01-03 00:03:45.698262 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-01-03 00:03:45.698397 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-01-03 00:03:45.698414 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-01-03 00:03:45.698440 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-01-03 00:03:46.539369 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=88757212-be5c-49b9-888f-926a6f412c0f]
2026-01-03 00:03:46.552592 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 51s [id=3b9a1aaf-7e5c-4a8d-9605-ad53eed7823e]
2026-01-03 00:03:46.988661 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 51s [id=a1ff168a-81c0-4875-aa8f-292ef1177954]
2026-01-03 00:03:55.706405 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m0s elapsed]
2026-01-03 00:03:56.910228 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m1s [id=966c667e-9a14-446f-9681-6d8c4f1bc44b]
2026-01-03 00:03:56.920645 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-03 00:03:56.942912 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=3211445846736244459]
2026-01-03 00:03:56.950483 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-03 00:03:56.951574 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-03 00:03:56.951760 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-03 00:03:56.958588 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-03 00:03:56.965105 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-03 00:03:56.968630 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-03 00:03:56.969063 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-03 00:03:56.980951 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-01-03 00:03:56.986192 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-03 00:03:56.997282 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-01-03 00:04:01.390517 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=3b9a1aaf-7e5c-4a8d-9605-ad53eed7823e/55b36885-901f-4155-8165-44f8903e4943]
2026-01-03 00:04:01.405771 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=88757212-be5c-49b9-888f-926a6f412c0f/e5f15048-4e23-4094-a7eb-216bc02a3879]
2026-01-03 00:04:01.434319 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=dbdf40df-c5b3-432e-a0b4-85e704102730/0b3b8b78-42ce-4774-a4e8-f10424aa2bf1]
2026-01-03 00:04:01.475487 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=dbdf40df-c5b3-432e-a0b4-85e704102730/5af48d4e-a5d9-4c77-9873-39f930691ccf]
2026-01-03 00:04:01.484775 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=88757212-be5c-49b9-888f-926a6f412c0f/65a14f46-a6f7-4da8-aafc-46a47f969b79]
2026-01-03 00:04:01.511335 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=3b9a1aaf-7e5c-4a8d-9605-ad53eed7823e/d4e0be62-f642-4458-ac3b-093009378a3c]
2026-01-03 00:04:06.969841 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Still creating... [10s elapsed]
2026-01-03 00:04:06.985180 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Still creating... [10s elapsed]
2026-01-03 00:04:06.987415 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Still creating... [10s elapsed]
2026-01-03 00:04:07.001870 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-01-03 00:04:07.604388 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 11s [id=dbdf40df-c5b3-432e-a0b4-85e704102730/87925512-ad51-4d26-92cc-5f354ec37d18]
2026-01-03 00:04:07.624681 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 11s [id=88757212-be5c-49b9-888f-926a6f412c0f/38f41ea3-c1f8-47b6-a316-62c713a7ab6d]
2026-01-03 00:04:07.734461 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 11s [id=3b9a1aaf-7e5c-4a8d-9605-ad53eed7823e/5710b3c7-cfac-4b3c-9149-eeb74f32a79f]
2026-01-03 00:04:17.002748 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-01-03 00:04:17.509672 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=064698f2-0dab-49ca-b98d-990b6c2cc084]
2026-01-03 00:04:17.557487 | orchestrator |
2026-01-03 00:04:17.557563 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-01-03 00:04:17.557573 | orchestrator |
2026-01-03 00:04:17.557580 | orchestrator | Outputs:
2026-01-03 00:04:17.557587 | orchestrator |
2026-01-03 00:04:17.557593 | orchestrator | manager_address =
2026-01-03 00:04:17.557600 | orchestrator | private_key =
2026-01-03 00:04:17.885504 | orchestrator | ok: Runtime: 0:01:41.162255
2026-01-03 00:04:17.916291 |
2026-01-03 00:04:17.916442 | TASK [Create infrastructure (stable)]
2026-01-03 00:04:18.450223 | orchestrator | skipping: Conditional result was False
2026-01-03 00:04:18.467306 |
2026-01-03 00:04:18.467485 | TASK [Fetch manager address]
2026-01-03 00:04:18.967633 | orchestrator | ok
2026-01-03 00:04:18.976994 |
2026-01-03 00:04:18.977354 | TASK [Set manager_host address]
2026-01-03 00:04:19.059261 | orchestrator | ok
2026-01-03 00:04:19.069231 |
2026-01-03 00:04:19.069386 | LOOP [Update ansible collections]
2026-01-03 00:04:20.326156 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-03 00:04:20.326562 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-03 00:04:20.326618 | orchestrator | Starting galaxy collection install process
2026-01-03 00:04:20.326654 | orchestrator | Process install dependency map
2026-01-03 00:04:20.326687 | orchestrator | Starting collection install process
2026-01-03 00:04:20.326717 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2026-01-03 00:04:20.326754 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2026-01-03 00:04:20.326800 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-01-03 00:04:20.326971 | orchestrator | ok: Item: commons Runtime: 0:00:00.843261
2026-01-03 00:04:21.784091 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-03 00:04:21.784477 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-03 00:04:21.784562 | orchestrator | Starting galaxy collection install process
2026-01-03 00:04:21.784623 | orchestrator | Process install dependency map
2026-01-03 00:04:21.784677 | orchestrator | Starting collection install process
2026-01-03 00:04:21.784726 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2026-01-03 00:04:21.784769 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2026-01-03 00:04:21.784809 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-03 00:04:21.784876 | orchestrator | ok: Item: services Runtime: 0:00:01.116955
2026-01-03 00:04:21.814102 |
2026-01-03 00:04:21.814356 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-03 00:04:32.422222 | orchestrator | ok
2026-01-03 00:04:32.433547 |
2026-01-03 00:04:32.433715 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-03 00:05:32.482764 | orchestrator | ok
2026-01-03 00:05:32.493305 |
2026-01-03 00:05:32.493432 | TASK [Fetch manager ssh hostkey]
2026-01-03 00:05:34.069348 | orchestrator | Output suppressed because no_log was given
2026-01-03 00:05:34.086461 |
2026-01-03 00:05:34.086665 | TASK [Get ssh keypair from terraform environment]
2026-01-03 00:05:34.627497 | orchestrator | ok: Runtime: 0:00:00.008137
2026-01-03 00:05:34.648287 |
2026-01-03 00:05:34.648488 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-03 00:05:34.696848 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-03 00:05:34.708813 | 2026-01-03 00:05:34.708965 | TASK [Run manager part 0] 2026-01-03 00:05:35.951915 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-03 00:05:36.019669 | orchestrator | 2026-01-03 00:05:36.019774 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-01-03 00:05:36.019783 | orchestrator | 2026-01-03 00:05:36.019800 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-01-03 00:05:37.981174 | orchestrator | ok: [testbed-manager] 2026-01-03 00:05:37.981230 | orchestrator | 2026-01-03 00:05:37.981257 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-03 00:05:37.981270 | orchestrator | 2026-01-03 00:05:37.981284 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:05:39.939298 | orchestrator | ok: [testbed-manager] 2026-01-03 00:05:39.939340 | orchestrator | 2026-01-03 00:05:39.939350 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-03 00:05:40.575094 | orchestrator | ok: [testbed-manager] 2026-01-03 00:05:40.575148 | orchestrator | 2026-01-03 00:05:40.575159 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-03 00:05:40.626610 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:40.626663 | orchestrator | 2026-01-03 00:05:40.626678 | orchestrator | TASK [Update package cache] **************************************************** 2026-01-03 00:05:40.660924 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:40.660974 | orchestrator | 2026-01-03 00:05:40.660984 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-03 00:05:40.700046 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:40.700088 | 
orchestrator | 2026-01-03 00:05:40.700094 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-03 00:05:40.733631 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:40.733672 | orchestrator | 2026-01-03 00:05:40.733678 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-03 00:05:40.765158 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:40.765210 | orchestrator | 2026-01-03 00:05:40.765220 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-01-03 00:05:40.808218 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:40.808273 | orchestrator | 2026-01-03 00:05:40.808285 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-01-03 00:05:40.846318 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:05:40.846372 | orchestrator | 2026-01-03 00:05:40.846384 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-01-03 00:05:41.570540 | orchestrator | changed: [testbed-manager] 2026-01-03 00:05:41.570763 | orchestrator | 2026-01-03 00:05:41.570775 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-01-03 00:08:21.380852 | orchestrator | changed: [testbed-manager] 2026-01-03 00:08:21.381005 | orchestrator | 2026-01-03 00:08:21.381112 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-03 00:09:39.587558 | orchestrator | changed: [testbed-manager] 2026-01-03 00:09:39.587965 | orchestrator | 2026-01-03 00:09:39.588001 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-03 00:10:05.792223 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:05.792315 | orchestrator | 2026-01-03 00:10:05.792335 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2026-01-03 00:10:15.403367 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:15.403415 | orchestrator | 2026-01-03 00:10:15.403424 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-03 00:10:15.454442 | orchestrator | ok: [testbed-manager] 2026-01-03 00:10:15.454628 | orchestrator | 2026-01-03 00:10:15.454742 | orchestrator | TASK [Get current user] ******************************************************** 2026-01-03 00:10:16.240912 | orchestrator | ok: [testbed-manager] 2026-01-03 00:10:16.240982 | orchestrator | 2026-01-03 00:10:16.240995 | orchestrator | TASK [Create venv directory] *************************************************** 2026-01-03 00:10:16.991934 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:16.992013 | orchestrator | 2026-01-03 00:10:16.992027 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-01-03 00:10:23.200769 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:23.200854 | orchestrator | 2026-01-03 00:10:23.200907 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-01-03 00:10:29.106335 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:29.106378 | orchestrator | 2026-01-03 00:10:29.106387 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-01-03 00:10:32.560728 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:32.560818 | orchestrator | 2026-01-03 00:10:32.560837 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-03 00:10:34.360123 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:34.360220 | orchestrator | 2026-01-03 00:10:34.360236 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-03 
00:10:35.499840 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-03 00:10:35.499936 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-03 00:10:35.499952 | orchestrator | 2026-01-03 00:10:35.499965 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-03 00:10:35.544436 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-03 00:10:35.544554 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-03 00:10:35.544570 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-03 00:10:35.544583 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-03 00:10:41.185866 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-03 00:10:41.185950 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-03 00:10:41.185965 | orchestrator | 2026-01-03 00:10:41.185978 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-03 00:10:41.762249 | orchestrator | changed: [testbed-manager] 2026-01-03 00:10:41.762331 | orchestrator | 2026-01-03 00:10:41.762348 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-03 00:13:02.717829 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-03 00:13:02.717926 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-03 00:13:02.717943 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-03 00:13:02.717956 | orchestrator | 2026-01-03 00:13:02.717970 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-03 00:13:06.009214 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-01-03 00:13:06.009306 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-03 00:13:06.009323 | orchestrator | 2026-01-03 00:13:06.009337 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-03 00:13:06.009349 | orchestrator | 2026-01-03 00:13:06.009360 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:13:07.428496 | orchestrator | ok: [testbed-manager] 2026-01-03 00:13:07.428543 | orchestrator | 2026-01-03 00:13:07.428550 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-03 00:13:07.478527 | orchestrator | ok: [testbed-manager] 2026-01-03 00:13:07.478621 | orchestrator | 2026-01-03 00:13:07.478638 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-03 00:13:07.552927 | orchestrator | ok: [testbed-manager] 2026-01-03 00:13:07.553063 | orchestrator | 2026-01-03 00:13:07.553090 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-03 00:13:08.359855 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:08.359972 | orchestrator | 2026-01-03 00:13:08.359990 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-03 00:13:09.221188 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:09.221251 | orchestrator | 2026-01-03 00:13:09.221266 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-03 00:13:10.639273 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-03 00:13:10.639431 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-03 00:13:10.639448 | orchestrator | 2026-01-03 00:13:10.639482 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-01-03 00:13:12.258293 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:12.258615 | orchestrator | 2026-01-03 00:13:12.258644 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-03 00:13:14.217679 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-03 00:13:14.217744 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-03 00:13:14.217757 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-03 00:13:14.217768 | orchestrator | 2026-01-03 00:13:14.217781 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-03 00:13:14.277515 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:13:14.277603 | orchestrator | 2026-01-03 00:13:14.277619 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-03 00:13:14.362324 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:13:14.362422 | orchestrator | 2026-01-03 00:13:14.362441 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-03 00:13:14.943123 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:14.943170 | orchestrator | 2026-01-03 00:13:14.943177 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-03 00:13:15.012957 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:13:15.013078 | orchestrator | 2026-01-03 00:13:15.013096 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-03 00:13:15.938799 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-03 00:13:15.938897 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:15.938915 | orchestrator | 2026-01-03 00:13:15.938928 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-03 00:13:15.979280 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:13:15.979358 | orchestrator | 2026-01-03 00:13:15.979373 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-03 00:13:16.019502 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:13:16.020281 | orchestrator | 2026-01-03 00:13:16.020363 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-03 00:13:16.051411 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:13:16.051465 | orchestrator | 2026-01-03 00:13:16.051474 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-03 00:13:16.108731 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:13:16.108774 | orchestrator | 2026-01-03 00:13:16.108780 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-03 00:13:16.866452 | orchestrator | ok: [testbed-manager] 2026-01-03 00:13:16.866519 | orchestrator | 2026-01-03 00:13:16.866529 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-03 00:13:16.866537 | orchestrator | 2026-01-03 00:13:16.866545 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:13:18.240350 | orchestrator | ok: [testbed-manager] 2026-01-03 00:13:18.240443 | orchestrator | 2026-01-03 00:13:18.240459 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-03 00:13:19.252468 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:19.252559 | orchestrator | 2026-01-03 00:13:19.252575 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:13:19.252589 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-03 00:13:19.252601 | orchestrator | 2026-01-03 00:13:19.532879 | orchestrator | ok: Runtime: 0:07:44.254305 2026-01-03 00:13:19.554284 | 2026-01-03 00:13:19.554424 | TASK [Point out that logging in to the manager is now possible] 2026-01-03 00:13:19.611319 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-01-03 00:13:19.622244 | 2026-01-03 00:13:19.622384 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-03 00:13:19.669911 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 2026-01-03 00:13:19.682310 | 2026-01-03 00:13:19.682463 | TASK [Run manager part 1 + 2] 2026-01-03 00:13:20.995748 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-03 00:13:21.062230 | orchestrator | 2026-01-03 00:13:21.062282 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-03 00:13:21.062289 | orchestrator | 2026-01-03 00:13:21.062301 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:13:24.497726 | orchestrator | ok: [testbed-manager] 2026-01-03 00:13:24.497822 | orchestrator | 2026-01-03 00:13:24.497881 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-03 00:13:24.545255 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:13:24.545310 | orchestrator | 2026-01-03 00:13:24.545319 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-03 00:13:24.601078 | orchestrator | ok: [testbed-manager] 2026-01-03 00:13:24.601127 | orchestrator | 2026-01-03 00:13:24.601139 | orchestrator | TASK [osism.commons.repository : Gather variables for 
each operating system] *** 2026-01-03 00:13:24.646809 | orchestrator | ok: [testbed-manager] 2026-01-03 00:13:24.646873 | orchestrator | 2026-01-03 00:13:24.646889 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-03 00:13:24.728295 | orchestrator | ok: [testbed-manager] 2026-01-03 00:13:24.728387 | orchestrator | 2026-01-03 00:13:24.728407 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-03 00:13:24.800043 | orchestrator | ok: [testbed-manager] 2026-01-03 00:13:24.800123 | orchestrator | 2026-01-03 00:13:24.800142 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-03 00:13:24.856031 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-03 00:13:24.856113 | orchestrator | 2026-01-03 00:13:24.856127 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-03 00:13:25.573135 | orchestrator | ok: [testbed-manager] 2026-01-03 00:13:25.573251 | orchestrator | 2026-01-03 00:13:25.573262 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-03 00:13:25.626591 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:13:25.626645 | orchestrator | 2026-01-03 00:13:25.626652 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-03 00:13:27.021048 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:27.021102 | orchestrator | 2026-01-03 00:13:27.021111 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-03 00:13:27.622244 | orchestrator | ok: [testbed-manager] 2026-01-03 00:13:27.622332 | orchestrator | 2026-01-03 00:13:27.622348 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-01-03 00:13:28.768230 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:28.768295 | orchestrator | 2026-01-03 00:13:28.768313 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-03 00:13:44.519884 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:44.520048 | orchestrator | 2026-01-03 00:13:44.520069 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-03 00:13:45.197188 | orchestrator | ok: [testbed-manager] 2026-01-03 00:13:45.197253 | orchestrator | 2026-01-03 00:13:45.197262 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-03 00:13:45.256551 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:13:45.256651 | orchestrator | 2026-01-03 00:13:45.256668 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-03 00:13:46.242833 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:46.242904 | orchestrator | 2026-01-03 00:13:46.242916 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-03 00:13:47.209809 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:47.209857 | orchestrator | 2026-01-03 00:13:47.209863 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-03 00:13:47.793205 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:47.793273 | orchestrator | 2026-01-03 00:13:47.793290 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-03 00:13:47.837804 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-03 00:13:47.838110 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-01-03 00:13:47.838137 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-03 00:13:47.838149 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-01-03 00:13:49.925996 | orchestrator | changed: [testbed-manager] 2026-01-03 00:13:49.926862 | orchestrator | 2026-01-03 00:13:49.926891 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-03 00:13:59.946864 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-03 00:13:59.946960 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-03 00:13:59.946972 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-03 00:13:59.946979 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-03 00:13:59.946990 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-03 00:13:59.946996 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-03 00:13:59.947002 | orchestrator | 2026-01-03 00:13:59.947009 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-03 00:14:01.018088 | orchestrator | changed: [testbed-manager] 2026-01-03 00:14:01.018138 | orchestrator | 2026-01-03 00:14:01.018147 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-03 00:14:01.056160 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:14:01.056244 | orchestrator | 2026-01-03 00:14:01.056259 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-03 00:14:04.214637 | orchestrator | changed: [testbed-manager] 2026-01-03 00:14:04.214729 | orchestrator | 2026-01-03 00:14:04.214745 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-03 00:14:04.258429 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:14:04.258515 | 
orchestrator | 2026-01-03 00:14:04.258531 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-03 00:15:44.792496 | orchestrator | changed: [testbed-manager] 2026-01-03 00:15:44.792601 | orchestrator | 2026-01-03 00:15:44.792621 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-03 00:15:45.967981 | orchestrator | ok: [testbed-manager] 2026-01-03 00:15:45.968076 | orchestrator | 2026-01-03 00:15:45.968093 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:15:45.968107 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-03 00:15:45.968119 | orchestrator | 2026-01-03 00:15:46.336439 | orchestrator | ok: Runtime: 0:02:26.044027 2026-01-03 00:15:46.355786 | 2026-01-03 00:15:46.355972 | TASK [Reboot manager] 2026-01-03 00:15:47.898650 | orchestrator | ok: Runtime: 0:00:00.950600 2026-01-03 00:15:47.907464 | 2026-01-03 00:15:47.907586 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-03 00:16:04.333865 | orchestrator | ok 2026-01-03 00:16:04.344104 | 2026-01-03 00:16:04.344252 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-03 00:17:04.387918 | orchestrator | ok 2026-01-03 00:17:04.395759 | 2026-01-03 00:17:04.395889 | TASK [Deploy manager + bootstrap nodes] 2026-01-03 00:17:07.048994 | orchestrator | 2026-01-03 00:17:07.049186 | orchestrator | # DEPLOY MANAGER 2026-01-03 00:17:07.049213 | orchestrator | 2026-01-03 00:17:07.049228 | orchestrator | + set -e 2026-01-03 00:17:07.049241 | orchestrator | + echo 2026-01-03 00:17:07.049256 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-03 00:17:07.049273 | orchestrator | + echo 2026-01-03 00:17:07.049324 | orchestrator | + cat /opt/manager-vars.sh 2026-01-03 00:17:07.052400 | orchestrator | export NUMBER_OF_NODES=6 2026-01-03 
00:17:07.052503 | orchestrator | 2026-01-03 00:17:07.052526 | orchestrator | export CEPH_VERSION=reef 2026-01-03 00:17:07.052544 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-03 00:17:07.052560 | orchestrator | export MANAGER_VERSION=latest 2026-01-03 00:17:07.052594 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-03 00:17:07.052610 | orchestrator | 2026-01-03 00:17:07.052632 | orchestrator | export ARA=false 2026-01-03 00:17:07.052648 | orchestrator | export DEPLOY_MODE=manager 2026-01-03 00:17:07.052670 | orchestrator | export TEMPEST=true 2026-01-03 00:17:07.052686 | orchestrator | export IS_ZUUL=true 2026-01-03 00:17:07.052702 | orchestrator | 2026-01-03 00:17:07.052724 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-01-03 00:17:07.052741 | orchestrator | export EXTERNAL_API=false 2026-01-03 00:17:07.052755 | orchestrator | 2026-01-03 00:17:07.052801 | orchestrator | export IMAGE_USER=ubuntu 2026-01-03 00:17:07.052834 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-03 00:17:07.052848 | orchestrator | 2026-01-03 00:17:07.052863 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-03 00:17:07.052893 | orchestrator | 2026-01-03 00:17:07.052907 | orchestrator | + echo 2026-01-03 00:17:07.052923 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-03 00:17:07.053453 | orchestrator | ++ export INTERACTIVE=false 2026-01-03 00:17:07.053484 | orchestrator | ++ INTERACTIVE=false 2026-01-03 00:17:07.053494 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-03 00:17:07.053504 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-03 00:17:07.053638 | orchestrator | + source /opt/manager-vars.sh 2026-01-03 00:17:07.053652 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-03 00:17:07.053661 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-03 00:17:07.053669 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-03 00:17:07.053746 | orchestrator | ++ CEPH_VERSION=reef 2026-01-03 00:17:07.053759 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-01-03 00:17:07.053790 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-03 00:17:07.053803 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-03 00:17:07.053813 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-03 00:17:07.053829 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-03 00:17:07.053849 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-03 00:17:07.053858 | orchestrator | ++ export ARA=false 2026-01-03 00:17:07.053867 | orchestrator | ++ ARA=false 2026-01-03 00:17:07.053880 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-03 00:17:07.053889 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-03 00:17:07.053897 | orchestrator | ++ export TEMPEST=true 2026-01-03 00:17:07.053906 | orchestrator | ++ TEMPEST=true 2026-01-03 00:17:07.053920 | orchestrator | ++ export IS_ZUUL=true 2026-01-03 00:17:07.053929 | orchestrator | ++ IS_ZUUL=true 2026-01-03 00:17:07.053938 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-01-03 00:17:07.053947 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-01-03 00:17:07.053955 | orchestrator | ++ export EXTERNAL_API=false 2026-01-03 00:17:07.053967 | orchestrator | ++ EXTERNAL_API=false 2026-01-03 00:17:07.053976 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-03 00:17:07.053984 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-03 00:17:07.053993 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-03 00:17:07.054002 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-03 00:17:07.054010 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-03 00:17:07.054058 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-03 00:17:07.054068 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-03 00:17:07.108047 | orchestrator | + docker version 2026-01-03 00:17:07.365509 | orchestrator | Client: Docker Engine - Community 2026-01-03 00:17:07.365626 | orchestrator | Version: 27.5.1 
2026-01-03 00:17:07.365645 | orchestrator | API version: 1.47 2026-01-03 00:17:07.365661 | orchestrator | Go version: go1.22.11 2026-01-03 00:17:07.365674 | orchestrator | Git commit: 9f9e405 2026-01-03 00:17:07.365687 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-03 00:17:07.365698 | orchestrator | OS/Arch: linux/amd64 2026-01-03 00:17:07.365706 | orchestrator | Context: default 2026-01-03 00:17:07.365713 | orchestrator | 2026-01-03 00:17:07.365722 | orchestrator | Server: Docker Engine - Community 2026-01-03 00:17:07.365729 | orchestrator | Engine: 2026-01-03 00:17:07.365736 | orchestrator | Version: 27.5.1 2026-01-03 00:17:07.365745 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-03 00:17:07.365836 | orchestrator | Go version: go1.22.11 2026-01-03 00:17:07.365847 | orchestrator | Git commit: 4c9b3b0 2026-01-03 00:17:07.365855 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-03 00:17:07.365862 | orchestrator | OS/Arch: linux/amd64 2026-01-03 00:17:07.365869 | orchestrator | Experimental: false 2026-01-03 00:17:07.365876 | orchestrator | containerd: 2026-01-03 00:17:07.365900 | orchestrator | Version: v2.2.1 2026-01-03 00:17:07.365914 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-03 00:17:07.365926 | orchestrator | runc: 2026-01-03 00:17:07.365938 | orchestrator | Version: 1.3.4 2026-01-03 00:17:07.365951 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-03 00:17:07.365964 | orchestrator | docker-init: 2026-01-03 00:17:07.365971 | orchestrator | Version: 0.19.0 2026-01-03 00:17:07.365980 | orchestrator | GitCommit: de40ad0 2026-01-03 00:17:07.369114 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-03 00:17:07.376962 | orchestrator | + set -e 2026-01-03 00:17:07.377004 | orchestrator | + source /opt/manager-vars.sh 2026-01-03 00:17:07.377014 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-03 00:17:07.377023 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-03 
00:17:07.377030 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-03 00:17:07.377038 | orchestrator | ++ CEPH_VERSION=reef 2026-01-03 00:17:07.377045 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-03 00:17:07.377053 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-03 00:17:07.377061 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-03 00:17:07.377068 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-03 00:17:07.377076 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-03 00:17:07.377084 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-03 00:17:07.377090 | orchestrator | ++ export ARA=false 2026-01-03 00:17:07.377096 | orchestrator | ++ ARA=false 2026-01-03 00:17:07.377103 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-03 00:17:07.377109 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-03 00:17:07.377115 | orchestrator | ++ export TEMPEST=true 2026-01-03 00:17:07.377121 | orchestrator | ++ TEMPEST=true 2026-01-03 00:17:07.377127 | orchestrator | ++ export IS_ZUUL=true 2026-01-03 00:17:07.377133 | orchestrator | ++ IS_ZUUL=true 2026-01-03 00:17:07.377140 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-01-03 00:17:07.377146 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-01-03 00:17:07.377152 | orchestrator | ++ export EXTERNAL_API=false 2026-01-03 00:17:07.377158 | orchestrator | ++ EXTERNAL_API=false 2026-01-03 00:17:07.377164 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-03 00:17:07.377170 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-03 00:17:07.377176 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-03 00:17:07.377182 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-03 00:17:07.377188 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-03 00:17:07.377194 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-03 00:17:07.377201 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-03 00:17:07.377207 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-03 00:17:07.377213 | orchestrator | ++ INTERACTIVE=false 2026-01-03 00:17:07.377219 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-03 00:17:07.377229 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-03 00:17:07.377235 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-03 00:17:07.377241 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-03 00:17:07.377247 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-01-03 00:17:07.383909 | orchestrator | + set -e 2026-01-03 00:17:07.383975 | orchestrator | + VERSION=reef 2026-01-03 00:17:07.384879 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-03 00:17:07.390679 | orchestrator | + [[ -n ceph_version: reef ]] 2026-01-03 00:17:07.390720 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-01-03 00:17:07.395785 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-01-03 00:17:07.400618 | orchestrator | + set -e 2026-01-03 00:17:07.400693 | orchestrator | + VERSION=2024.2 2026-01-03 00:17:07.401023 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-03 00:17:07.404537 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-01-03 00:17:07.404569 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-01-03 00:17:07.409479 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-03 00:17:07.410327 | orchestrator | ++ semver latest 7.0.0 2026-01-03 00:17:07.456310 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-03 00:17:07.456434 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-03 00:17:07.456450 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-03 00:17:07.456796 | orchestrator | ++ semver latest 10.0.0-0 2026-01-03 00:17:07.498661 | 
orchestrator | + [[ -1 -ge 0 ]] 2026-01-03 00:17:07.499107 | orchestrator | ++ semver 2024.2 2025.1 2026-01-03 00:17:07.552822 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-03 00:17:07.552940 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-01-03 00:17:07.641295 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-03 00:17:07.642569 | orchestrator | + source /opt/venv/bin/activate 2026-01-03 00:17:07.643741 | orchestrator | ++ deactivate nondestructive 2026-01-03 00:17:07.643894 | orchestrator | ++ '[' -n '' ']' 2026-01-03 00:17:07.643910 | orchestrator | ++ '[' -n '' ']' 2026-01-03 00:17:07.643922 | orchestrator | ++ hash -r 2026-01-03 00:17:07.643946 | orchestrator | ++ '[' -n '' ']' 2026-01-03 00:17:07.643957 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-03 00:17:07.643968 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-03 00:17:07.643982 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-01-03 00:17:07.644011 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-03 00:17:07.644022 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-03 00:17:07.644034 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-03 00:17:07.644044 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-03 00:17:07.644056 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-03 00:17:07.644077 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-03 00:17:07.644089 | orchestrator | ++ export PATH 2026-01-03 00:17:07.644225 | orchestrator | ++ '[' -n '' ']' 2026-01-03 00:17:07.644243 | orchestrator | ++ '[' -z '' ']' 2026-01-03 00:17:07.644386 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-03 00:17:07.644415 | orchestrator | ++ PS1='(venv) ' 2026-01-03 00:17:07.644426 | orchestrator | ++ export PS1 2026-01-03 00:17:07.644438 | orchestrator | ++ 
VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-03 00:17:07.644464 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-03 00:17:07.644475 | orchestrator | ++ hash -r 2026-01-03 00:17:07.644525 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-03 00:17:08.806622 | orchestrator | 2026-01-03 00:17:08.806738 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-03 00:17:08.806754 | orchestrator | 2026-01-03 00:17:08.806765 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-03 00:17:09.360808 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:09.360916 | orchestrator | 2026-01-03 00:17:09.360934 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-01-03 00:17:10.348156 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:10.348267 | orchestrator | 2026-01-03 00:17:10.348283 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-03 00:17:10.348297 | orchestrator | 2026-01-03 00:17:10.348308 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:17:12.611229 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:12.611339 | orchestrator | 2026-01-03 00:17:12.611354 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-03 00:17:12.663430 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:12.663526 | orchestrator | 2026-01-03 00:17:12.663542 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-03 00:17:13.114749 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:13.114878 | orchestrator | 2026-01-03 00:17:13.114897 | orchestrator | TASK [Add netbox_enable parameter] 
********************************************* 2026-01-03 00:17:13.147050 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:17:13.147443 | orchestrator | 2026-01-03 00:17:13.147468 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-03 00:17:13.480570 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:13.480689 | orchestrator | 2026-01-03 00:17:13.480707 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-03 00:17:13.534862 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:17:13.534960 | orchestrator | 2026-01-03 00:17:13.534972 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-03 00:17:13.867612 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:13.868018 | orchestrator | 2026-01-03 00:17:13.868046 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-01-03 00:17:13.992822 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:17:13.992929 | orchestrator | 2026-01-03 00:17:13.992947 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-03 00:17:13.992961 | orchestrator | 2026-01-03 00:17:13.992972 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:17:15.677338 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:15.677441 | orchestrator | 2026-01-03 00:17:15.677458 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-03 00:17:15.762124 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-03 00:17:15.762237 | orchestrator | 2026-01-03 00:17:15.762262 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-03 00:17:15.817447 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-03 00:17:15.817527 | orchestrator | 2026-01-03 00:17:15.817539 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-01-03 00:17:16.921713 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-03 00:17:16.921873 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-03 00:17:16.921891 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-03 00:17:16.921904 | orchestrator | 2026-01-03 00:17:16.921916 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-03 00:17:18.720424 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-03 00:17:18.720537 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-01-03 00:17:18.720558 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-03 00:17:18.720570 | orchestrator | 2026-01-03 00:17:18.720583 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-03 00:17:19.338164 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-03 00:17:19.338280 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:19.338306 | orchestrator | 2026-01-03 00:17:19.338328 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-03 00:17:19.971995 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-03 00:17:19.972129 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:19.972158 | orchestrator | 2026-01-03 00:17:19.972180 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-03 00:17:20.018941 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:17:20.019060 | orchestrator | 2026-01-03 
00:17:20.019086 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-03 00:17:20.389057 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:20.389167 | orchestrator | 2026-01-03 00:17:20.389246 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-03 00:17:20.458128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-03 00:17:20.458210 | orchestrator | 2026-01-03 00:17:20.458222 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-03 00:17:21.542320 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:21.542426 | orchestrator | 2026-01-03 00:17:21.542448 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-03 00:17:22.358330 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:22.358436 | orchestrator | 2026-01-03 00:17:22.358453 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-03 00:17:33.213440 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:33.213538 | orchestrator | 2026-01-03 00:17:33.213550 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-03 00:17:33.267746 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:17:33.267916 | orchestrator | 2026-01-03 00:17:33.267932 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-03 00:17:33.267945 | orchestrator | 2026-01-03 00:17:33.267988 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:17:35.087284 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:35.087385 | orchestrator | 2026-01-03 00:17:35.087401 | orchestrator | TASK [Apply manager role] 
****************************************************** 2026-01-03 00:17:35.200573 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-03 00:17:35.200668 | orchestrator | 2026-01-03 00:17:35.200683 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-01-03 00:17:35.254436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-03 00:17:35.254527 | orchestrator | 2026-01-03 00:17:35.254541 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-03 00:17:37.819241 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:37.819343 | orchestrator | 2026-01-03 00:17:37.819360 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-03 00:17:37.875210 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:37.875295 | orchestrator | 2026-01-03 00:17:37.875309 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-03 00:17:38.006474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-03 00:17:38.006569 | orchestrator | 2026-01-03 00:17:38.006585 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-03 00:17:40.690676 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-03 00:17:40.690840 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-03 00:17:40.690857 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-03 00:17:40.690870 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-03 00:17:40.690881 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-03 00:17:40.690893 | 
orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-03 00:17:40.690904 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-03 00:17:40.690915 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-01-03 00:17:40.690927 | orchestrator | 2026-01-03 00:17:40.690939 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-03 00:17:41.364655 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:41.364846 | orchestrator | 2026-01-03 00:17:41.364866 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-03 00:17:41.986664 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:41.986815 | orchestrator | 2026-01-03 00:17:41.986831 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-03 00:17:42.064609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-03 00:17:42.064699 | orchestrator | 2026-01-03 00:17:42.064709 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-03 00:17:43.213879 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-03 00:17:43.214002 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-03 00:17:43.214096 | orchestrator | 2026-01-03 00:17:43.214120 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-03 00:17:43.835426 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:43.835546 | orchestrator | 2026-01-03 00:17:43.835564 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-03 00:17:43.893263 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:17:43.893356 | orchestrator | 2026-01-03 00:17:43.893370 | 
orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-03 00:17:43.972047 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-03 00:17:43.972132 | orchestrator | 2026-01-03 00:17:43.972146 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-03 00:17:44.574321 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:44.574445 | orchestrator | 2026-01-03 00:17:44.574516 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-03 00:17:44.629495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-03 00:17:44.629608 | orchestrator | 2026-01-03 00:17:44.629632 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-03 00:17:45.961303 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-03 00:17:45.961372 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-03 00:17:45.961378 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:45.961383 | orchestrator | 2026-01-03 00:17:45.961388 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-03 00:17:46.586879 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:46.586987 | orchestrator | 2026-01-03 00:17:46.587005 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-03 00:17:46.641134 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:17:46.641227 | orchestrator | 2026-01-03 00:17:46.641242 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-03 00:17:46.735172 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-03 00:17:46.735259 | orchestrator | 2026-01-03 00:17:46.735297 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-03 00:17:47.251400 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:47.251500 | orchestrator | 2026-01-03 00:17:47.251517 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-03 00:17:47.643869 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:47.643970 | orchestrator | 2026-01-03 00:17:47.643987 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-03 00:17:48.846279 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-03 00:17:48.846402 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-01-03 00:17:48.846420 | orchestrator | 2026-01-03 00:17:48.846435 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-03 00:17:49.477581 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:49.477713 | orchestrator | 2026-01-03 00:17:49.477731 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-03 00:17:49.868042 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:49.868137 | orchestrator | 2026-01-03 00:17:49.868152 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-03 00:17:50.210252 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:50.210363 | orchestrator | 2026-01-03 00:17:50.210380 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-03 00:17:50.260119 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:17:50.260210 | orchestrator | 2026-01-03 00:17:50.260224 | orchestrator | 
TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-03 00:17:50.325117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-03 00:17:50.325213 | orchestrator | 2026-01-03 00:17:50.325228 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-03 00:17:50.369370 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:50.369468 | orchestrator | 2026-01-03 00:17:50.369487 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-03 00:17:52.346890 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-03 00:17:52.346995 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-03 00:17:52.347009 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-01-03 00:17:52.347020 | orchestrator | 2026-01-03 00:17:52.347031 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-03 00:17:53.047087 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:53.047191 | orchestrator | 2026-01-03 00:17:53.047210 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-03 00:17:53.736474 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:53.736574 | orchestrator | 2026-01-03 00:17:53.736594 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-03 00:17:54.419894 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:54.420038 | orchestrator | 2026-01-03 00:17:54.420067 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-03 00:17:54.476941 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-03 00:17:54.477035 | orchestrator | 2026-01-03 00:17:54.477054 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-01-03 00:17:54.516123 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:54.516229 | orchestrator | 2026-01-03 00:17:54.516245 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-03 00:17:55.247904 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-03 00:17:55.248086 | orchestrator | 2026-01-03 00:17:55.248103 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-03 00:17:55.323189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-03 00:17:55.323277 | orchestrator | 2026-01-03 00:17:55.323290 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-03 00:17:55.985879 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:55.985972 | orchestrator | 2026-01-03 00:17:55.985986 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-03 00:17:56.575446 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:56.575543 | orchestrator | 2026-01-03 00:17:56.575561 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-03 00:17:56.630960 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:17:56.631067 | orchestrator | 2026-01-03 00:17:56.631085 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-03 00:17:56.689891 | orchestrator | ok: [testbed-manager] 2026-01-03 00:17:56.690087 | orchestrator | 2026-01-03 00:17:56.690120 | 
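The paired tasks `Set mariadb healthcheck for mariadb < 11.0.0` / `>= 11.0.0` above gate configuration on a version comparison, and the shell trace earlier uses a `semver` helper that returns -1/0/1 for the same purpose. A sketch of such version gating; note it substitutes GNU `sort -V` for the comparison, which is an assumption, not the actual `semver` tool:

```shell
# Version-gated configuration, as in the mariadb healthcheck tasks above.
# sort -V orders version strings; if $2 sorts first (or equal), $1 >= $2.
version_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

mariadb_version="11.8.4"   # version reported in the check output below

if version_ge "$mariadb_version" "11.0.0"; then
    echo "select the mariadb >= 11.0.0 healthcheck"
else
    echo "select the mariadb < 11.0.0 healthcheck"
fi
```

The same shape covers the earlier `semver 2024.2 2025.1` check, where `-1 -ge 0` being false skips the newer-release branch.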
orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-03 00:17:57.490279 | orchestrator | changed: [testbed-manager] 2026-01-03 00:17:57.490383 | orchestrator | 2026-01-03 00:17:57.490399 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-03 00:19:05.264779 | orchestrator | changed: [testbed-manager] 2026-01-03 00:19:05.264866 | orchestrator | 2026-01-03 00:19:05.264878 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-03 00:19:06.230843 | orchestrator | ok: [testbed-manager] 2026-01-03 00:19:06.230942 | orchestrator | 2026-01-03 00:19:06.230957 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-03 00:19:06.277725 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:19:06.277863 | orchestrator | 2026-01-03 00:19:06.277879 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-01-03 00:19:08.875225 | orchestrator | changed: [testbed-manager] 2026-01-03 00:19:08.875337 | orchestrator | 2026-01-03 00:19:08.875378 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-03 00:19:08.922784 | orchestrator | ok: [testbed-manager] 2026-01-03 00:19:08.922891 | orchestrator | 2026-01-03 00:19:08.922908 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-03 00:19:08.922921 | orchestrator | 2026-01-03 00:19:08.922932 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-03 00:19:08.977996 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:19:08.978137 | orchestrator | 2026-01-03 00:19:08.978151 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-03 00:20:09.028827 | orchestrator | Pausing for 
60 seconds 2026-01-03 00:20:09.028947 | orchestrator | changed: [testbed-manager] 2026-01-03 00:20:09.028971 | orchestrator | 2026-01-03 00:20:09.028997 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-01-03 00:20:12.256759 | orchestrator | changed: [testbed-manager] 2026-01-03 00:20:12.256867 | orchestrator | 2026-01-03 00:20:12.256884 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-03 00:20:53.691515 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-03 00:20:53.691659 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-01-03 00:20:53.691677 | orchestrator | changed: [testbed-manager] 2026-01-03 00:20:53.691740 | orchestrator | 2026-01-03 00:20:53.691754 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-03 00:21:03.802207 | orchestrator | changed: [testbed-manager] 2026-01-03 00:21:03.802305 | orchestrator | 2026-01-03 00:21:03.802318 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-03 00:21:03.881752 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-03 00:21:03.881861 | orchestrator | 2026-01-03 00:21:03.881885 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-03 00:21:03.881905 | orchestrator | 2026-01-03 00:21:03.881923 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-03 00:21:03.936249 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:21:03.936345 | orchestrator | 2026-01-03 00:21:03.936360 | orchestrator | TASK [osism.services.manager : Include version verification tasks] 
************* 2026-01-03 00:21:04.008743 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-03 00:21:04.008838 | orchestrator | 2026-01-03 00:21:04.008855 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-03 00:21:04.690252 | orchestrator | changed: [testbed-manager] 2026-01-03 00:21:04.690354 | orchestrator | 2026-01-03 00:21:04.690371 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-03 00:21:07.675086 | orchestrator | ok: [testbed-manager] 2026-01-03 00:21:07.675192 | orchestrator | 2026-01-03 00:21:07.675209 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-01-03 00:21:07.731946 | orchestrator | ok: [testbed-manager] => { 2026-01-03 00:21:07.732034 | orchestrator | "version_check_result.stdout_lines": [ 2026-01-03 00:21:07.732049 | orchestrator | "=== OSISM Container Version Check ===", 2026-01-03 00:21:07.732065 | orchestrator | "Checking running containers against expected versions...", 2026-01-03 00:21:07.732078 | orchestrator | "", 2026-01-03 00:21:07.732089 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-01-03 00:21:07.732101 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-03 00:21:07.732112 | orchestrator | " Enabled: true", 2026-01-03 00:21:07.732123 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-03 00:21:07.732134 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:21:07.732145 | orchestrator | "", 2026-01-03 00:21:07.732156 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-01-03 00:21:07.732167 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-01-03 00:21:07.732178 | orchestrator | " 
Enabled: true", 2026-01-03 00:21:07.732189 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-01-03 00:21:07.732202 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:21:07.732221 | orchestrator | "", 2026-01-03 00:21:07.732240 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-01-03 00:21:07.732257 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-03 00:21:07.732274 | orchestrator | " Enabled: true", 2026-01-03 00:21:07.732291 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-03 00:21:07.732308 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:21:07.732325 | orchestrator | "", 2026-01-03 00:21:07.732343 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-01-03 00:21:07.732362 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-03 00:21:07.732381 | orchestrator | " Enabled: true", 2026-01-03 00:21:07.732400 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-03 00:21:07.732418 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:21:07.732432 | orchestrator | "", 2026-01-03 00:21:07.732475 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-01-03 00:21:07.732487 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-03 00:21:07.732497 | orchestrator | " Enabled: true", 2026-01-03 00:21:07.732508 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-03 00:21:07.732519 | orchestrator | " Status: ✅ MATCH", 2026-01-03 00:21:07.732530 | orchestrator | "", 2026-01-03 00:21:07.732541 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-01-03 00:21:07.732552 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-03 00:21:07.732562 | orchestrator | " Enabled: true", 2026-01-03 00:21:07.732573 | orchestrator | " Running: 
registry.osism.tech/osism/osism:latest",
2026-01-03 00:21:07.732584 | orchestrator | " Status: ✅ MATCH",
2026-01-03 00:21:07.732595 | orchestrator | "",
2026-01-03 00:21:07.732606 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-01-03 00:21:07.732616 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-03 00:21:07.732627 | orchestrator | " Enabled: true",
2026-01-03 00:21:07.732638 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-03 00:21:07.732649 | orchestrator | " Status: ✅ MATCH",
2026-01-03 00:21:07.732660 | orchestrator | "",
2026-01-03 00:21:07.732670 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-01-03 00:21:07.732681 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-03 00:21:07.732714 | orchestrator | " Enabled: true",
2026-01-03 00:21:07.732725 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-03 00:21:07.732745 | orchestrator | " Status: ✅ MATCH",
2026-01-03 00:21:07.732761 | orchestrator | "",
2026-01-03 00:21:07.732773 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-01-03 00:21:07.732784 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-01-03 00:21:07.732795 | orchestrator | " Enabled: true",
2026-01-03 00:21:07.732806 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2026-01-03 00:21:07.732817 | orchestrator | " Status: ✅ MATCH",
2026-01-03 00:21:07.732828 | orchestrator | "",
2026-01-03 00:21:07.732839 | orchestrator | "Checking service: redis (Redis Cache)",
2026-01-03 00:21:07.732850 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-03 00:21:07.732861 | orchestrator | " Enabled: true",
2026-01-03 00:21:07.732871 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-03 00:21:07.732882 | orchestrator | " Status: ✅ MATCH",
2026-01-03 00:21:07.732893 | orchestrator | "",
2026-01-03 00:21:07.732903 | orchestrator | "Checking service: api (OSISM API Service)",
2026-01-03 00:21:07.732914 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-03 00:21:07.732925 | orchestrator | " Enabled: true",
2026-01-03 00:21:07.732936 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-03 00:21:07.732946 | orchestrator | " Status: ✅ MATCH",
2026-01-03 00:21:07.732957 | orchestrator | "",
2026-01-03 00:21:07.732968 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-01-03 00:21:07.732979 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-03 00:21:07.732989 | orchestrator | " Enabled: true",
2026-01-03 00:21:07.733000 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-03 00:21:07.733010 | orchestrator | " Status: ✅ MATCH",
2026-01-03 00:21:07.733021 | orchestrator | "",
2026-01-03 00:21:07.733032 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-01-03 00:21:07.733042 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-03 00:21:07.733053 | orchestrator | " Enabled: true",
2026-01-03 00:21:07.733064 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-03 00:21:07.733074 | orchestrator | " Status: ✅ MATCH",
2026-01-03 00:21:07.733085 | orchestrator | "",
2026-01-03 00:21:07.733095 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-01-03 00:21:07.733106 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-03 00:21:07.733124 | orchestrator | " Enabled: true",
2026-01-03 00:21:07.733135 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-03 00:21:07.733146 | orchestrator | " Status: ✅ MATCH",
2026-01-03 00:21:07.733157 | orchestrator | "",
2026-01-03 00:21:07.733168 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-01-03 00:21:07.733196 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2026-01-03 00:21:07.733207 | orchestrator | " Enabled: true",
2026-01-03 00:21:07.733218 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2026-01-03 00:21:07.733228 | orchestrator | " Status: ✅ MATCH",
2026-01-03 00:21:07.733239 | orchestrator | "",
2026-01-03 00:21:07.733250 | orchestrator | "=== Summary ===",
2026-01-03 00:21:07.733261 | orchestrator | "Errors (version mismatches): 0",
2026-01-03 00:21:07.733272 | orchestrator | "Warnings (expected containers not running): 0",
2026-01-03 00:21:07.733282 | orchestrator | "",
2026-01-03 00:21:07.733293 | orchestrator | "✅ All running containers match expected versions!"
2026-01-03 00:21:07.733304 | orchestrator | ]
2026-01-03 00:21:07.733315 | orchestrator | }
2026-01-03 00:21:07.733326 | orchestrator |
2026-01-03 00:21:07.733338 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-01-03 00:21:07.791808 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:21:07.791892 | orchestrator |
2026-01-03 00:21:07.791909 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:21:07.791922 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2026-01-03 00:21:07.791934 | orchestrator |
2026-01-03 00:21:07.890268 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-03 00:21:07.890363 | orchestrator | + deactivate
2026-01-03 00:21:07.890380 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-03 00:21:07.890392 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-03 00:21:07.890402 | orchestrator | + export PATH
2026-01-03 00:21:07.890412 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-03 00:21:07.890422 | orchestrator | + '[' -n '' ']'
2026-01-03 00:21:07.890432 | orchestrator | + hash -r
2026-01-03 00:21:07.890442 | orchestrator | + '[' -n '' ']'
2026-01-03 00:21:07.890451 | orchestrator | + unset VIRTUAL_ENV
2026-01-03 00:21:07.890461 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-01-03 00:21:07.890471 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-01-03 00:21:07.890480 | orchestrator | + unset -f deactivate
2026-01-03 00:21:07.890490 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-01-03 00:21:07.899928 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-03 00:21:07.900001 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-03 00:21:07.900015 | orchestrator | + local max_attempts=60
2026-01-03 00:21:07.900026 | orchestrator | + local name=ceph-ansible
2026-01-03 00:21:07.900038 | orchestrator | + local attempt_num=1
2026-01-03 00:21:07.900672 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-03 00:21:07.934265 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-03 00:21:07.934380 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-03 00:21:07.934403 | orchestrator | + local max_attempts=60
2026-01-03 00:21:07.934422 | orchestrator | + local name=kolla-ansible
2026-01-03 00:21:07.934441 | orchestrator | + local attempt_num=1
2026-01-03 00:21:07.935010 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-03 00:21:07.977367 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-03 00:21:07.977443 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-03 00:21:07.977456 | orchestrator | + local max_attempts=60
2026-01-03 00:21:07.977468 | orchestrator | + local name=osism-ansible
2026-01-03 00:21:07.977480 | orchestrator | + local attempt_num=1
2026-01-03 00:21:07.978222 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-03 00:21:08.014374 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-03 00:21:08.014459 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-03 00:21:08.014474 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-03 00:21:08.745987 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-01-03 00:21:08.924381 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-01-03 00:21:08.924514 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2026-01-03 00:21:08.924532 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2026-01-03 00:21:08.924544 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2026-01-03 00:21:08.924556 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2026-01-03 00:21:08.924567 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2026-01-03 00:21:08.924578 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2026-01-03 00:21:08.924607 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 57 seconds (healthy)
2026-01-03 00:21:08.924618 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2026-01-03 00:21:08.924629 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2026-01-03 00:21:08.924640 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2026-01-03 00:21:08.924651 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2026-01-03 00:21:08.924662 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2026-01-03 00:21:08.924672 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2026-01-03 00:21:08.924716 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2026-01-03 00:21:08.924729 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2026-01-03 00:21:08.931271 | orchestrator | ++ semver latest 7.0.0
2026-01-03 00:21:08.997443 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-03 00:21:08.997514 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-03 00:21:08.997523 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-01-03 00:21:09.002489 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-01-03 00:21:21.220269 | orchestrator | 2026-01-03 00:21:21 | INFO  | Task 38a9736f-cb7f-4644-a09b-23f02e20911e (resolvconf) was prepared for execution.
2026-01-03 00:21:21.220411 | orchestrator | 2026-01-03 00:21:21 | INFO  | It takes a moment until task 38a9736f-cb7f-4644-a09b-23f02e20911e (resolvconf) has been started and output is visible here.
2026-01-03 00:21:34.868104 | orchestrator |
2026-01-03 00:21:34.868231 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-01-03 00:21:34.868249 | orchestrator |
2026-01-03 00:21:34.868262 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-03 00:21:34.868274 | orchestrator | Saturday 03 January 2026 00:21:25 +0000 (0:00:00.136) 0:00:00.136 ******
2026-01-03 00:21:34.869134 | orchestrator | ok: [testbed-manager]
2026-01-03 00:21:34.869157 | orchestrator |
2026-01-03 00:21:34.869171 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-03 00:21:34.869186 | orchestrator | Saturday 03 January 2026 00:21:29 +0000 (0:00:03.775) 0:00:03.912 ******
2026-01-03 00:21:34.869199 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:21:34.869210 | orchestrator |
2026-01-03 00:21:34.869221 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-03 00:21:34.869233 | orchestrator | Saturday 03 January 2026 00:21:29 +0000 (0:00:00.063) 0:00:03.975 ******
2026-01-03 00:21:34.869244 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-01-03 00:21:34.869257 | orchestrator |
2026-01-03 00:21:34.869268 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-03 00:21:34.869279 | orchestrator | Saturday 03 January 2026 00:21:29 +0000 (0:00:00.082) 0:00:04.058 ******
2026-01-03 00:21:34.869302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-01-03 00:21:34.869314 | orchestrator |
2026-01-03 00:21:34.869325 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-03 00:21:34.869336 | orchestrator | Saturday 03 January 2026 00:21:29 +0000 (0:00:00.093) 0:00:04.152 ******
2026-01-03 00:21:34.869347 | orchestrator | ok: [testbed-manager]
2026-01-03 00:21:34.869358 | orchestrator |
2026-01-03 00:21:34.869369 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-03 00:21:34.869380 | orchestrator | Saturday 03 January 2026 00:21:30 +0000 (0:00:01.111) 0:00:05.263 ******
2026-01-03 00:21:34.869391 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:21:34.869402 | orchestrator |
2026-01-03 00:21:34.869413 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-03 00:21:34.869424 | orchestrator | Saturday 03 January 2026 00:21:30 +0000 (0:00:00.062) 0:00:05.326 ******
2026-01-03 00:21:34.869435 | orchestrator | ok: [testbed-manager]
2026-01-03 00:21:34.869445 | orchestrator |
2026-01-03 00:21:34.869456 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-03 00:21:34.869467 | orchestrator | Saturday 03 January 2026 00:21:31 +0000 (0:00:00.521) 0:00:05.847 ******
2026-01-03 00:21:34.869478 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:21:34.869489 | orchestrator |
2026-01-03 00:21:34.869500 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-03 00:21:34.869512 | orchestrator | Saturday 03 January 2026 00:21:31 +0000 (0:00:00.086) 0:00:05.934 ******
2026-01-03 00:21:34.869523 | orchestrator | changed: [testbed-manager]
2026-01-03 00:21:34.869534 | orchestrator |
2026-01-03 00:21:34.869545 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-03 00:21:34.869555 | orchestrator | Saturday 03 January 2026 00:21:31 +0000 (0:00:01.056) 0:00:06.457 ******
2026-01-03 00:21:34.869567 | orchestrator | changed: [testbed-manager]
2026-01-03 00:21:34.869578 | orchestrator |
2026-01-03 00:21:34.869588 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-03 00:21:34.869599 | orchestrator | Saturday 03 January 2026 00:21:32 +0000 (0:00:01.056) 0:00:07.513 ******
2026-01-03 00:21:34.869610 | orchestrator | ok: [testbed-manager]
2026-01-03 00:21:34.869651 | orchestrator |
2026-01-03 00:21:34.869663 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-03 00:21:34.869703 | orchestrator | Saturday 03 January 2026 00:21:33 +0000 (0:00:00.865) 0:00:08.379 ******
2026-01-03 00:21:34.869716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-01-03 00:21:34.869727 | orchestrator |
2026-01-03 00:21:34.869738 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-03 00:21:34.869748 | orchestrator | Saturday 03 January 2026 00:21:33 +0000 (0:00:00.076) 0:00:08.455 ******
2026-01-03 00:21:34.869759 | orchestrator | changed: [testbed-manager]
2026-01-03 00:21:34.869770 | orchestrator |
2026-01-03 00:21:34.869781 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:21:34.869793 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-03 00:21:34.869804 | orchestrator |
2026-01-03 00:21:34.869815 | orchestrator |
2026-01-03 00:21:34.869826 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:21:34.869836 | orchestrator | Saturday 03 January 2026 00:21:34 +0000 (0:00:01.047) 0:00:09.503 ******
2026-01-03 00:21:34.869847 | orchestrator | ===============================================================================
2026-01-03 00:21:34.869858 | orchestrator | Gathering Facts --------------------------------------------------------- 3.78s
2026-01-03 00:21:34.869868 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.11s
2026-01-03 00:21:34.869879 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.06s
2026-01-03 00:21:34.869890 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.05s
2026-01-03 00:21:34.869900 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.87s
2026-01-03 00:21:34.869911 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s
2026-01-03 00:21:34.869941 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.52s
2026-01-03 00:21:34.869953 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2026-01-03 00:21:34.869964 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2026-01-03 00:21:34.869974 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2026-01-03 00:21:34.869985 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-01-03 00:21:34.869996 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2026-01-03 00:21:34.870006 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2026-01-03 00:21:35.066853 | orchestrator | + osism apply sshconfig
2026-01-03 00:21:47.061864 | orchestrator | 2026-01-03 00:21:47 | INFO  | Task b0aee4d1-4e48-4b95-879b-1dc657799ab5 (sshconfig) was prepared for execution.
2026-01-03 00:21:47.061975 | orchestrator | 2026-01-03 00:21:47 | INFO  | It takes a moment until task b0aee4d1-4e48-4b95-879b-1dc657799ab5 (sshconfig) has been started and output is visible here.
2026-01-03 00:21:58.025330 | orchestrator |
2026-01-03 00:21:58.025445 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-01-03 00:21:58.025463 | orchestrator |
2026-01-03 00:21:58.025477 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-01-03 00:21:58.025488 | orchestrator | Saturday 03 January 2026 00:21:51 +0000 (0:00:00.151) 0:00:00.151 ******
2026-01-03 00:21:58.025500 | orchestrator | ok: [testbed-manager]
2026-01-03 00:21:58.025513 | orchestrator |
2026-01-03 00:21:58.025524 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-01-03 00:21:58.025535 | orchestrator | Saturday 03 January 2026 00:21:51 +0000 (0:00:00.508) 0:00:00.660 ******
2026-01-03 00:21:58.025575 | orchestrator | changed: [testbed-manager]
2026-01-03 00:21:58.025587 | orchestrator |
2026-01-03 00:21:58.025598 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-01-03 00:21:58.025609 | orchestrator | Saturday 03 January 2026 00:21:52 +0000 (0:00:00.516) 0:00:01.176 ******
2026-01-03 00:21:58.025620 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-01-03 00:21:58.025638 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-01-03 00:21:58.025657 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-01-03 00:21:58.025674 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-01-03 00:21:58.025752 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-01-03 00:21:58.025773 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-01-03 00:21:58.025791 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-01-03 00:21:58.025810 | orchestrator |
2026-01-03 00:21:58.025830 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-01-03 00:21:58.025848 | orchestrator | Saturday 03 January 2026 00:21:57 +0000 (0:00:05.162) 0:00:06.338 ******
2026-01-03 00:21:58.025867 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:21:58.025880 | orchestrator |
2026-01-03 00:21:58.025895 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-01-03 00:21:58.025913 | orchestrator | Saturday 03 January 2026 00:21:57 +0000 (0:00:00.073) 0:00:06.412 ******
2026-01-03 00:21:58.025932 | orchestrator | changed: [testbed-manager]
2026-01-03 00:21:58.025951 | orchestrator |
2026-01-03 00:21:58.025971 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:21:58.025993 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-03 00:21:58.026012 | orchestrator |
2026-01-03 00:21:58.026105 | orchestrator |
2026-01-03 00:21:58.026126 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:21:58.026144 | orchestrator | Saturday 03 January 2026 00:21:57 +0000 (0:00:00.507) 0:00:06.920 ******
2026-01-03 00:21:58.026161 | orchestrator | ===============================================================================
2026-01-03 00:21:58.026178 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.16s
2026-01-03 00:21:58.026196 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.52s
2026-01-03 00:21:58.026212 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.51s
2026-01-03 00:21:58.026228 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.51s
2026-01-03 00:21:58.026246 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2026-01-03 00:21:58.210319 | orchestrator | + osism apply known-hosts
2026-01-03 00:22:10.139046 | orchestrator | 2026-01-03 00:22:10 | INFO  | Task a09789a5-b86e-4eab-9473-279a1faad53b (known-hosts) was prepared for execution.
2026-01-03 00:22:10.139161 | orchestrator | 2026-01-03 00:22:10 | INFO  | It takes a moment until task a09789a5-b86e-4eab-9473-279a1faad53b (known-hosts) has been started and output is visible here.
2026-01-03 00:22:26.745379 | orchestrator |
2026-01-03 00:22:26.745473 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-01-03 00:22:26.745484 | orchestrator |
2026-01-03 00:22:26.745492 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-01-03 00:22:26.745500 | orchestrator | Saturday 03 January 2026 00:22:14 +0000 (0:00:00.163) 0:00:00.163 ******
2026-01-03 00:22:26.745508 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-01-03 00:22:26.745516 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-01-03 00:22:26.745524 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-01-03 00:22:26.745531 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-01-03 00:22:26.745538 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-01-03 00:22:26.745569 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-01-03 00:22:26.745577 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-01-03 00:22:26.745583 | orchestrator |
2026-01-03 00:22:26.745591 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-01-03 00:22:26.745599 | orchestrator | Saturday 03 January 2026 00:22:20 +0000 (0:00:05.929) 0:00:06.092 ******
2026-01-03 00:22:26.745607 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-01-03 00:22:26.745624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-01-03 00:22:26.745632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-01-03 00:22:26.745639 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-01-03 00:22:26.745646 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-01-03 00:22:26.745653 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-01-03 00:22:26.745660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-01-03 00:22:26.745667 | orchestrator |
2026-01-03 00:22:26.745674 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-03 00:22:26.745681 | orchestrator | Saturday 03 January 2026 00:22:20 +0000 (0:00:00.153) 0:00:06.246 ******
2026-01-03 00:22:26.745688 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMVklVXMV9rEiTSWUky2VBGhJjUCDsx4++qqgUecKDh9)
2026-01-03 00:22:26.745699 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8sM7YonbwMubuY7z+/pjV9obVLzurOXWlFRHA0l3s8f+LigKv+zLBCnV1BDIpvtuAIkUR9X6VjqErVh1/VlR02hBYoJx5B0PuI8n0hNvB5X3IGsD21A9PS5LUruKB8nC/cjTUKJFyEKsXbJnNCbWfgwQWCharbuoCqgdEt02TzQtpQdzjoBWnkCPzNfWE3aGkRXP1Sf+6+oo5L8Tvga7DDBC4oemxb76LMdOso7uO0lI1XjeZM+pEBZLgHRLjvvJ6c6EtsQ65T41GsvAkP/9SFrIRdt9IZbyEVRYTYze0wFyOanhnudp+/vm/U+bA0n6XKzm63bx7qG1PcIC7FGUPqh5DN1839gz9FT46K8Nl7JWnYzUTjxMSJI/C2lC+3AYzHpPkue138DbpLsPGgbGx5p2RhGa4X731vusshy7jG6wudzI4Mk217f/qkibBC2dfz7c9oSqQwbVw2+b+gYkwhcMtF2RxJz62p5SJyZcjwZX8nQepHpzbgWT8K6T5OwM=)
2026-01-03 00:22:26.745708 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHyDD4nyywcNWE1MpejWGmRD2FHlWeC/sirY9mLDaw9S5fdTxKdnBcWt9w6V88uPWNuzihHRAuEgR17ZM6wt4v0=)
2026-01-03 00:22:26.745716 | orchestrator |
2026-01-03 00:22:26.745723 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-03 00:22:26.745730 | orchestrator | Saturday 03 January 2026 00:22:21 +0000 (0:00:01.158) 0:00:07.404 ******
2026-01-03 00:22:26.745737 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFyUONwivS9cQ38kwpll2BJ6EbHEhrH3rZ+RNaSPY1y1)
2026-01-03 00:22:26.745830 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoWrrSH5fLgBptoEY5hTzJ2TrEEplQ65K61bY6VJwB/5Ksd3muC5vzbwkPR0M0fvoZdfHMXuE0UwBfB7qNHEk/vM5WWpISf1K5fX0d/XDxPYzvy94FERYP0QaPDjcRkLf1HcbkYtmOSBr1t7l6CL+9Rm83colDiBiWx5Ne5cKHmHDZaB9g1mgGM27dp3tfpczNAmk4Sxevaf4hbhnRzarmOnaASOajohxpgRUlnlPvAWmwbcsOGDQB8E5/j5rk8goZCuP9nTfRzg3X4lanKll/ZUOFpkuq8/wKdN5/ued6NGlvMFx7zNUlRwnjgchXSq3Y1+XJnjLji7lExg6YLWqTl2Rf4/cdEjvJjoxxOeAw0WVVShgCGVvp3ISYt9Y6Gx3nmtuDxdIG82fsW3eX/Y2tT6nPGJptPQBtgHAN6qAgRJAh6jCcDWILFNIEhtYo1A5/jKuGS9o6H/x+ao41YTvJrTak2U/LHBWBU9T4xcHJhziaE43lyHDNffw6UNR/6d8=)
2026-01-03 00:22:26.745847 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKHoVPFaKFSg7oEYHcRb29M8hqQ6zAGCY8pnpeDaYg2B7d3Ss3n2k+3wHO+ntaCaIDeruFNAyvr6ZCjDEaXB/m4=)
2026-01-03 00:22:26.745855 | orchestrator |
2026-01-03 00:22:26.745862 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-03 00:22:26.745869 | orchestrator | Saturday 03 January 2026 00:22:22 +0000 (0:00:01.107) 0:00:08.512 ******
2026-01-03 00:22:26.745877 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPuLDTCB4RgR9c2PPDRMZgyXeWS0gy4rfT1IdZ+kAYcGoPGySYhshYgNjYPdE8a9CDAA85RHv1iTbHws30KOTYtrMbH+OVL0Cixi9dXaC01bjI+ZIzYwZB6v/ma6e/lb792MVsBlz6QYhGKgoLcrfF9DYq/ASPfFKlDyoLBf67+zzRGFFzZI4lGegjp7Xe4QUjW2MzDtP5bkQnJY9hYdxhd7DOigcLfa8Y8Vx91RTed+CQuE8P2Jy3/YJxYdQ4oATFTm3selyYVN0rZBY5GY6onTzG90Pp2xANArJg02VYJiv8ipWKQ8hONlAQ/hwttjZWNVUg46n55cLxql6O2ZFdU/fix8TnvJkKWm/R20a6PMUSGCovCbbEG0OeeIzaPOLDXwDgjZhA6OYWxQ6HjMSeJGYNxeH7ikOFlgtLVamlpXC3BVlBTp07OudByCmqIk3SnSIiXbTSEMWh4W2F8dSTC3xiDsk+276sJ3RsOsCpgcT231rGKFQc6ufYhCZ1zas=)
2026-01-03 00:22:26.745884 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILQfDLaoT+QMqKOuuZOQrlnyg9vEW74HuojQLLv0N4gF)
2026-01-03 00:22:26.745892 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK7xjwOOYdo+wIseHEXCaWQ0M1KdM7qGyAGIb1v6jk+zFECl2eGpd0UDu0HNkRT3qg9SVkwDPDPSe+VyV+tLrME=)
2026-01-03 00:22:26.745899 | orchestrator |
2026-01-03 00:22:26.745906 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-03 00:22:26.745962 | orchestrator | Saturday 03 January 2026 00:22:23 +0000 (0:00:01.057) 0:00:09.569 ******
2026-01-03 00:22:26.745973 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD2VMdZXYUlgoEnUJgS4Waf/QIJ/RmQ8+B0P/DtFNKgO3q7wyv3v/RYfxyzd+l33k6buKN0mIZ21Pvv3hIjuoefmcdC5Fi2nqMu6vMwLdiRVPowVgkimtIi+jAZGqz2LwK/9S9I3kkywd1ars4EdplU1oixP0aVJFUrjUvS7QBfUDXfhqRPV8FymDK4kYdbCCdYebUCaueMRwvsfzaJ8N9YRaPlK6tE+Q9oXMUfezLWN6I/fydfosMJcsiozhLIM3mPO30B5T2SdSkiU74JeBErqqvUqSiZIPviz9sd3AWvXZ405Hbu2ITI6sosRM6EbasJw399bN1Gnu/Uw5mYC3w3NJ4KFoLO/x3O2bgCsHy3e/7D4KjbwBUnsw2rEscSivEM+XpxgqJuojyvbqv1R6lWgLvIb37UK6oprqCtODdGVFhCUXONzZcB7u8C351Uyf0G9cyPp6dL3b86ma8qcki6oagOPXes6kgS0OTAPwminGyKKYd1Ybebq669pNst0bc=)
2026-01-03 00:22:26.745980 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFmbCYcylXUEszcuqAVdDA56Br2HWK1smQmNFdAYSORb)
2026-01-03 00:22:26.745987 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIwjchHUnlAlSI0mT/xNbzun8386yf4kAtWgcKc80IYdSf0NniiTkjhasSBVPS6YpkB0PznZJXvNAabAkjA7Ips=)
2026-01-03 00:22:26.745994 | orchestrator |
2026-01-03 00:22:26.746001 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-03 00:22:26.746008 | orchestrator | Saturday 03 January 2026 00:22:24 +0000 (0:00:01.006) 0:00:10.576 ******
2026-01-03 00:22:26.746057 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkTlO68LmqvVFzvaFYY+tdiSodKizsjdmhT65TAkTdLDpQozy9B8VHhtOCipgL3YH30wh9v8xwMHZEz1hRUeHXThp38ka9AZ6uBHNfCtpVIkvO3DMnyfShtizFFlRhrOhCNReFrU82KyFkXqBLmyxOdQPkZmgCV4TilYfjBOcyGljf/91mpPmq04Y3TyqebLDolC2aE1WcON8tU+TFp3UZ/g3Onszb7I2BCuT1+MNqe6+Zo9RhsupIhxKO6whCuIHD8jxQjo5J0KrUQMcYmr5CKP74z3BdaOxvMdFakMg2Ew9wY1BPJtXM7to9KbnSoVi/H56onFShKbbpmwgWCmTYSRycvMD0Rw1lNqUgXHCeOUnuxLIaxadaV8pZx+b98QVSiG+bR0PdYM0LuUhpuh434sbunX7P0Cg/GC6mvj81EsgpTnvCwg5v4vcG5cF4BcUUMRTYO2lJ49dTa+zP3D/Rv+fMFP/FWl1xyA4UVeHZNF/2bq1ROvVok0ezc7FzHbc=)
2026-01-03 00:22:26.746073 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWCdP6Xi/aDmx36RNfGgNS4mGEgAL/FkeYBsW9xeLOP)
2026-01-03 00:22:26.746080 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI/EtiBsokadU/MacX4sjGWMy7AETTj3r5pSgHUZrMDiaR4zDKeEd+4/WpAXEUuDng60jEPIrc17dKyHBFNtS6M=)
2026-01-03 00:22:26.746087 | orchestrator |
2026-01-03 00:22:26.746094 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-03 00:22:26.746102 | orchestrator | Saturday 03 January 2026 00:22:25 +0000 (0:00:01.030) 0:00:11.607 ******
2026-01-03 00:22:26.746117 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCP4y5O+GjVa04JfGAAjtcaOMXRyc2xkYmwbZLbmLGN6fGkDKuSCrY940jPY49ckRA9W2kAxyNErl5I78+NJjbQgPn/N5V+6+M4sdsjB4fuchr+6a7nLLpMWo/yjdXY5jRPXX5uxmAAsxgTvSISgKjQ1woE+l8e+RDvktH9mDG8hBxErVZ1pep34WFwAEo5Ffmfx+zJGv54bPan126r24u48GKwJqi1ce9o7zSocrNNxbKqVzZ6r/16TKlTxiVpcbfNKkc9M3R5VSXvpDnUD5xUuqqf6uG4MDvkIReD7tZeWytlYYKp9VQ5vir8ARyht8W03CV6/E90T7pPaorVcbodPgOKv7uGRG09L+OSZpRrDOeIkZs0qQn5hnDkqVfnWMEOKj+gAsbqYcTCjZQhc55fJPZNpx/BsUOrYUqtQH4em1bivit++TYT4UZQYIEMBPJbiX6EAqt7Lk5pj3DU92Aqyqj4v0jOxXejZlFAZ8VkRMUM+S0hWaB0QEz6LupimTc=)
2026-01-03 00:22:37.367601 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBETj6Gp48ck7s2OJrScXKDBIvWPUUoJS3F/S1pPwGxIsdPc0Y75RgYvGoMxQVLyiEwRw8KSZwjsUA/gT6qYCLzI=)
2026-01-03 00:22:37.367731 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHc9coC/eX0OeKvJXxK/wm9UtiDmvsCXnz6iMrS9ta1O)
2026-01-03 00:22:37.367751 | orchestrator |
2026-01-03 00:22:37.367855 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-03 00:22:37.367874 | orchestrator | Saturday 03 January 2026 00:22:26 +0000 (0:00:01.046) 0:00:12.653 ******
2026-01-03 00:22:37.367894 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCk/1ffiwR95i6HdH3imDOqfi4kUNq5t7GM3XhM4UBdsnbwq7VOc5DVkeKiGz7ftUP1n+OBh84HRhWCuIugxojiGVVW869DhMRUxwjcKl9hmsuPhmMMTncqvqmKhXrJJsqSUtK6F/s/h+OZg6Jk2c4WCV9OBWWzkXh2k5daf2zQfnl96R2EA/Lnr9NJaqjgAIIo+hjK1DR+gf7AsUkh1r2fYDa8p2QP5ugY0xlq6GtGSDMJwwUNdvM0D6MWO73UMJyQ9YW01LsJpUB41+LmEIM4G4kqfU43PT8sGYjZfrpuk66zaCNS2Ogc827gs89AqdV0ThZyzA2DXJvf58YX83LKG66T8Q/zYq4piH4VgR5Udtcfzh8Ov1CsEmc5As+y8NynhzGbiW2guECqMjldNOH9t2DTVsKmRyMGiGF/XP67TgnwLrVqpWicHp8lThocZMZddtHMwEzTKpZHdKi8eHXg5aLUxO+1YeHt94PKyTrwpP9JUWFmdi5y/x05yZGEYBc=)
2026-01-03 00:22:37.367915 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGNk6i5+T76rwLuiBz2SUrAkptYtwn+NrU2J1mOMDYyF03SCJmpc9lyo3kVmMV2XbhIoRhl7hs+KJKcKZNtFqjo=)
2026-01-03 00:22:37.367928 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICVru5o4QsnFJEhJ4EBxfIK0vyUZ+C6TzIWpOaNLVoqb)
2026-01-03 00:22:37.367939 | orchestrator |
2026-01-03 00:22:37.367951 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-01-03 00:22:37.367963 | orchestrator | Saturday 03 January 2026 00:22:27 +0000 (0:00:01.019) 0:00:13.673 ******
2026-01-03 00:22:37.367974 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-01-03 00:22:37.367986 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-01-03 00:22:37.367998 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-01-03 00:22:37.368009 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-01-03 00:22:37.368020 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-01-03 00:22:37.368032 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-01-03 00:22:37.368050 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-01-03 00:22:37.368069 | orchestrator |
2026-01-03 00:22:37.368111 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-01-03 00:22:37.368154 | orchestrator | Saturday 03 January 2026 00:22:32 +0000 (0:00:05.200) 0:00:18.874 ******
2026-01-03 00:22:37.368167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-01-03 00:22:37.368179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-01-03 00:22:37.368190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-01-03 00:22:37.368201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-01-03 00:22:37.368212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-01-03 00:22:37.368222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-01-03 00:22:37.368233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-01-03 00:22:37.368244 | orchestrator |
2026-01-03 00:22:37.368263 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-03 00:22:37.368281 | orchestrator | Saturday 03 January 2026 00:22:33 +0000 (0:00:00.169) 0:00:19.043 ******
2026-01-03 00:22:37.368299 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMVklVXMV9rEiTSWUky2VBGhJjUCDsx4++qqgUecKDh9)
2026-01-03 00:22:37.368350 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8sM7YonbwMubuY7z+/pjV9obVLzurOXWlFRHA0l3s8f+LigKv+zLBCnV1BDIpvtuAIkUR9X6VjqErVh1/VlR02hBYoJx5B0PuI8n0hNvB5X3IGsD21A9PS5LUruKB8nC/cjTUKJFyEKsXbJnNCbWfgwQWCharbuoCqgdEt02TzQtpQdzjoBWnkCPzNfWE3aGkRXP1Sf+6+oo5L8Tvga7DDBC4oemxb76LMdOso7uO0lI1XjeZM+pEBZLgHRLjvvJ6c6EtsQ65T41GsvAkP/9SFrIRdt9IZbyEVRYTYze0wFyOanhnudp+/vm/U+bA0n6XKzm63bx7qG1PcIC7FGUPqh5DN1839gz9FT46K8Nl7JWnYzUTjxMSJI/C2lC+3AYzHpPkue138DbpLsPGgbGx5p2RhGa4X731vusshy7jG6wudzI4Mk217f/qkibBC2dfz7c9oSqQwbVw2+b+gYkwhcMtF2RxJz62p5SJyZcjwZX8nQepHpzbgWT8K6T5OwM=)
2026-01-03 00:22:37.368370 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHyDD4nyywcNWE1MpejWGmRD2FHlWeC/sirY9mLDaw9S5fdTxKdnBcWt9w6V88uPWNuzihHRAuEgR17ZM6wt4v0=)
2026-01-03
00:22:37.368388 | orchestrator | 2026-01-03 00:22:37.368406 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:37.368419 | orchestrator | Saturday 03 January 2026 00:22:34 +0000 (0:00:01.076) 0:00:20.120 ****** 2026-01-03 00:22:37.368430 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKHoVPFaKFSg7oEYHcRb29M8hqQ6zAGCY8pnpeDaYg2B7d3Ss3n2k+3wHO+ntaCaIDeruFNAyvr6ZCjDEaXB/m4=) 2026-01-03 00:22:37.368442 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoWrrSH5fLgBptoEY5hTzJ2TrEEplQ65K61bY6VJwB/5Ksd3muC5vzbwkPR0M0fvoZdfHMXuE0UwBfB7qNHEk/vM5WWpISf1K5fX0d/XDxPYzvy94FERYP0QaPDjcRkLf1HcbkYtmOSBr1t7l6CL+9Rm83colDiBiWx5Ne5cKHmHDZaB9g1mgGM27dp3tfpczNAmk4Sxevaf4hbhnRzarmOnaASOajohxpgRUlnlPvAWmwbcsOGDQB8E5/j5rk8goZCuP9nTfRzg3X4lanKll/ZUOFpkuq8/wKdN5/ued6NGlvMFx7zNUlRwnjgchXSq3Y1+XJnjLji7lExg6YLWqTl2Rf4/cdEjvJjoxxOeAw0WVVShgCGVvp3ISYt9Y6Gx3nmtuDxdIG82fsW3eX/Y2tT6nPGJptPQBtgHAN6qAgRJAh6jCcDWILFNIEhtYo1A5/jKuGS9o6H/x+ao41YTvJrTak2U/LHBWBU9T4xcHJhziaE43lyHDNffw6UNR/6d8=) 2026-01-03 00:22:37.368462 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFyUONwivS9cQ38kwpll2BJ6EbHEhrH3rZ+RNaSPY1y1) 2026-01-03 00:22:37.368473 | orchestrator | 2026-01-03 00:22:37.368484 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:37.368495 | orchestrator | Saturday 03 January 2026 00:22:35 +0000 (0:00:01.066) 0:00:21.186 ****** 2026-01-03 00:22:37.368506 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILQfDLaoT+QMqKOuuZOQrlnyg9vEW74HuojQLLv0N4gF) 2026-01-03 00:22:37.368518 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDPuLDTCB4RgR9c2PPDRMZgyXeWS0gy4rfT1IdZ+kAYcGoPGySYhshYgNjYPdE8a9CDAA85RHv1iTbHws30KOTYtrMbH+OVL0Cixi9dXaC01bjI+ZIzYwZB6v/ma6e/lb792MVsBlz6QYhGKgoLcrfF9DYq/ASPfFKlDyoLBf67+zzRGFFzZI4lGegjp7Xe4QUjW2MzDtP5bkQnJY9hYdxhd7DOigcLfa8Y8Vx91RTed+CQuE8P2Jy3/YJxYdQ4oATFTm3selyYVN0rZBY5GY6onTzG90Pp2xANArJg02VYJiv8ipWKQ8hONlAQ/hwttjZWNVUg46n55cLxql6O2ZFdU/fix8TnvJkKWm/R20a6PMUSGCovCbbEG0OeeIzaPOLDXwDgjZhA6OYWxQ6HjMSeJGYNxeH7ikOFlgtLVamlpXC3BVlBTp07OudByCmqIk3SnSIiXbTSEMWh4W2F8dSTC3xiDsk+276sJ3RsOsCpgcT231rGKFQc6ufYhCZ1zas=) 2026-01-03 00:22:37.368530 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK7xjwOOYdo+wIseHEXCaWQ0M1KdM7qGyAGIb1v6jk+zFECl2eGpd0UDu0HNkRT3qg9SVkwDPDPSe+VyV+tLrME=) 2026-01-03 00:22:37.368541 | orchestrator | 2026-01-03 00:22:37.368551 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:37.368562 | orchestrator | Saturday 03 January 2026 00:22:36 +0000 (0:00:01.039) 0:00:22.225 ****** 2026-01-03 00:22:37.368573 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFmbCYcylXUEszcuqAVdDA56Br2HWK1smQmNFdAYSORb) 2026-01-03 00:22:37.368590 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD2VMdZXYUlgoEnUJgS4Waf/QIJ/RmQ8+B0P/DtFNKgO3q7wyv3v/RYfxyzd+l33k6buKN0mIZ21Pvv3hIjuoefmcdC5Fi2nqMu6vMwLdiRVPowVgkimtIi+jAZGqz2LwK/9S9I3kkywd1ars4EdplU1oixP0aVJFUrjUvS7QBfUDXfhqRPV8FymDK4kYdbCCdYebUCaueMRwvsfzaJ8N9YRaPlK6tE+Q9oXMUfezLWN6I/fydfosMJcsiozhLIM3mPO30B5T2SdSkiU74JeBErqqvUqSiZIPviz9sd3AWvXZ405Hbu2ITI6sosRM6EbasJw399bN1Gnu/Uw5mYC3w3NJ4KFoLO/x3O2bgCsHy3e/7D4KjbwBUnsw2rEscSivEM+XpxgqJuojyvbqv1R6lWgLvIb37UK6oprqCtODdGVFhCUXONzZcB7u8C351Uyf0G9cyPp6dL3b86ma8qcki6oagOPXes6kgS0OTAPwminGyKKYd1Ybebq669pNst0bc=) 2026-01-03 00:22:37.368613 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIwjchHUnlAlSI0mT/xNbzun8386yf4kAtWgcKc80IYdSf0NniiTkjhasSBVPS6YpkB0PznZJXvNAabAkjA7Ips=) 2026-01-03 00:22:41.678986 | orchestrator | 2026-01-03 00:22:41.679088 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:41.679103 | orchestrator | Saturday 03 January 2026 00:22:37 +0000 (0:00:01.048) 0:00:23.273 ****** 2026-01-03 00:22:41.679117 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkTlO68LmqvVFzvaFYY+tdiSodKizsjdmhT65TAkTdLDpQozy9B8VHhtOCipgL3YH30wh9v8xwMHZEz1hRUeHXThp38ka9AZ6uBHNfCtpVIkvO3DMnyfShtizFFlRhrOhCNReFrU82KyFkXqBLmyxOdQPkZmgCV4TilYfjBOcyGljf/91mpPmq04Y3TyqebLDolC2aE1WcON8tU+TFp3UZ/g3Onszb7I2BCuT1+MNqe6+Zo9RhsupIhxKO6whCuIHD8jxQjo5J0KrUQMcYmr5CKP74z3BdaOxvMdFakMg2Ew9wY1BPJtXM7to9KbnSoVi/H56onFShKbbpmwgWCmTYSRycvMD0Rw1lNqUgXHCeOUnuxLIaxadaV8pZx+b98QVSiG+bR0PdYM0LuUhpuh434sbunX7P0Cg/GC6mvj81EsgpTnvCwg5v4vcG5cF4BcUUMRTYO2lJ49dTa+zP3D/Rv+fMFP/FWl1xyA4UVeHZNF/2bq1ROvVok0ezc7FzHbc=) 2026-01-03 00:22:41.679130 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWCdP6Xi/aDmx36RNfGgNS4mGEgAL/FkeYBsW9xeLOP) 2026-01-03 00:22:41.679142 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI/EtiBsokadU/MacX4sjGWMy7AETTj3r5pSgHUZrMDiaR4zDKeEd+4/WpAXEUuDng60jEPIrc17dKyHBFNtS6M=) 2026-01-03 00:22:41.679176 | orchestrator | 2026-01-03 00:22:41.679189 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:41.679206 | orchestrator | Saturday 03 January 2026 00:22:38 +0000 (0:00:01.061) 0:00:24.335 ****** 2026-01-03 00:22:41.679243 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCP4y5O+GjVa04JfGAAjtcaOMXRyc2xkYmwbZLbmLGN6fGkDKuSCrY940jPY49ckRA9W2kAxyNErl5I78+NJjbQgPn/N5V+6+M4sdsjB4fuchr+6a7nLLpMWo/yjdXY5jRPXX5uxmAAsxgTvSISgKjQ1woE+l8e+RDvktH9mDG8hBxErVZ1pep34WFwAEo5Ffmfx+zJGv54bPan126r24u48GKwJqi1ce9o7zSocrNNxbKqVzZ6r/16TKlTxiVpcbfNKkc9M3R5VSXvpDnUD5xUuqqf6uG4MDvkIReD7tZeWytlYYKp9VQ5vir8ARyht8W03CV6/E90T7pPaorVcbodPgOKv7uGRG09L+OSZpRrDOeIkZs0qQn5hnDkqVfnWMEOKj+gAsbqYcTCjZQhc55fJPZNpx/BsUOrYUqtQH4em1bivit++TYT4UZQYIEMBPJbiX6EAqt7Lk5pj3DU92Aqyqj4v0jOxXejZlFAZ8VkRMUM+S0hWaB0QEz6LupimTc=) 2026-01-03 00:22:41.679263 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBETj6Gp48ck7s2OJrScXKDBIvWPUUoJS3F/S1pPwGxIsdPc0Y75RgYvGoMxQVLyiEwRw8KSZwjsUA/gT6qYCLzI=) 2026-01-03 00:22:41.679274 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHc9coC/eX0OeKvJXxK/wm9UtiDmvsCXnz6iMrS9ta1O) 2026-01-03 00:22:41.679284 | orchestrator | 2026-01-03 00:22:41.679293 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-03 00:22:41.679303 | orchestrator | Saturday 03 January 2026 00:22:39 +0000 (0:00:01.040) 0:00:25.376 ****** 2026-01-03 00:22:41.679313 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGNk6i5+T76rwLuiBz2SUrAkptYtwn+NrU2J1mOMDYyF03SCJmpc9lyo3kVmMV2XbhIoRhl7hs+KJKcKZNtFqjo=) 2026-01-03 00:22:41.679323 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCk/1ffiwR95i6HdH3imDOqfi4kUNq5t7GM3XhM4UBdsnbwq7VOc5DVkeKiGz7ftUP1n+OBh84HRhWCuIugxojiGVVW869DhMRUxwjcKl9hmsuPhmMMTncqvqmKhXrJJsqSUtK6F/s/h+OZg6Jk2c4WCV9OBWWzkXh2k5daf2zQfnl96R2EA/Lnr9NJaqjgAIIo+hjK1DR+gf7AsUkh1r2fYDa8p2QP5ugY0xlq6GtGSDMJwwUNdvM0D6MWO73UMJyQ9YW01LsJpUB41+LmEIM4G4kqfU43PT8sGYjZfrpuk66zaCNS2Ogc827gs89AqdV0ThZyzA2DXJvf58YX83LKG66T8Q/zYq4piH4VgR5Udtcfzh8Ov1CsEmc5As+y8NynhzGbiW2guECqMjldNOH9t2DTVsKmRyMGiGF/XP67TgnwLrVqpWicHp8lThocZMZddtHMwEzTKpZHdKi8eHXg5aLUxO+1YeHt94PKyTrwpP9JUWFmdi5y/x05yZGEYBc=) 2026-01-03 00:22:41.679333 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICVru5o4QsnFJEhJ4EBxfIK0vyUZ+C6TzIWpOaNLVoqb) 2026-01-03 00:22:41.679343 | orchestrator | 2026-01-03 00:22:41.679353 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-03 00:22:41.679363 | orchestrator | Saturday 03 January 2026 00:22:40 +0000 (0:00:01.050) 0:00:26.426 ****** 2026-01-03 00:22:41.679373 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-03 00:22:41.679383 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-03 00:22:41.679393 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-03 00:22:41.679403 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-03 00:22:41.679412 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-03 00:22:41.679422 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-03 00:22:41.679432 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-03 00:22:41.679442 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:22:41.679452 | orchestrator | 2026-01-03 00:22:41.679479 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-03 00:22:41.679490 | orchestrator | Saturday 03 January 
2026 00:22:40 +0000 (0:00:00.161) 0:00:26.588 ****** 2026-01-03 00:22:41.679499 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:22:41.679509 | orchestrator | 2026-01-03 00:22:41.679521 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-03 00:22:41.679542 | orchestrator | Saturday 03 January 2026 00:22:40 +0000 (0:00:00.052) 0:00:26.641 ****** 2026-01-03 00:22:41.679554 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:22:41.679565 | orchestrator | 2026-01-03 00:22:41.679576 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-03 00:22:41.679588 | orchestrator | Saturday 03 January 2026 00:22:40 +0000 (0:00:00.042) 0:00:26.683 ****** 2026-01-03 00:22:41.679599 | orchestrator | changed: [testbed-manager] 2026-01-03 00:22:41.679609 | orchestrator | 2026-01-03 00:22:41.679621 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:22:41.679632 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-03 00:22:41.679646 | orchestrator | 2026-01-03 00:22:41.679657 | orchestrator | 2026-01-03 00:22:41.679668 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:22:41.679679 | orchestrator | Saturday 03 January 2026 00:22:41 +0000 (0:00:00.709) 0:00:27.393 ****** 2026-01-03 00:22:41.679690 | orchestrator | =============================================================================== 2026-01-03 00:22:41.679701 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.93s 2026-01-03 00:22:41.679712 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.20s 2026-01-03 00:22:41.679724 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-01-03 
00:22:41.679735 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-01-03 00:22:41.679747 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-01-03 00:22:41.679758 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-03 00:22:41.679789 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-03 00:22:41.679801 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-03 00:22:41.679811 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-03 00:22:41.679821 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-03 00:22:41.679831 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-03 00:22:41.679840 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-03 00:22:41.679857 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-03 00:22:41.679867 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-03 00:22:41.679877 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-01-03 00:22:41.679886 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-01-03 00:22:41.679896 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.71s 2026-01-03 00:22:41.679905 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-01-03 00:22:41.679915 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 
2026-01-03 00:22:41.679925 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2026-01-03 00:22:42.006361 | orchestrator | + osism apply squid 2026-01-03 00:22:54.033963 | orchestrator | 2026-01-03 00:22:54 | INFO  | Task a0a58b80-3d8d-4655-80a5-7410f6ae1ff7 (squid) was prepared for execution. 2026-01-03 00:22:54.034230 | orchestrator | 2026-01-03 00:22:54 | INFO  | It takes a moment until task a0a58b80-3d8d-4655-80a5-7410f6ae1ff7 (squid) has been started and output is visible here. 2026-01-03 00:24:50.417737 | orchestrator | 2026-01-03 00:24:50.417862 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-03 00:24:50.417883 | orchestrator | 2026-01-03 00:24:50.417929 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-03 00:24:50.417945 | orchestrator | Saturday 03 January 2026 00:22:58 +0000 (0:00:00.166) 0:00:00.166 ****** 2026-01-03 00:24:50.417958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-03 00:24:50.418086 | orchestrator | 2026-01-03 00:24:50.418103 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-03 00:24:50.418117 | orchestrator | Saturday 03 January 2026 00:22:58 +0000 (0:00:00.082) 0:00:00.249 ****** 2026-01-03 00:24:50.418132 | orchestrator | ok: [testbed-manager] 2026-01-03 00:24:50.418149 | orchestrator | 2026-01-03 00:24:50.418163 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-03 00:24:50.418178 | orchestrator | Saturday 03 January 2026 00:22:59 +0000 (0:00:01.419) 0:00:01.669 ****** 2026-01-03 00:24:50.418193 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-03 00:24:50.418206 | orchestrator | changed: 
[testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-03 00:24:50.418219 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-03 00:24:50.418233 | orchestrator | 2026-01-03 00:24:50.418247 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-03 00:24:50.418259 | orchestrator | Saturday 03 January 2026 00:23:00 +0000 (0:00:01.139) 0:00:02.808 ****** 2026-01-03 00:24:50.418273 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-03 00:24:50.418286 | orchestrator | 2026-01-03 00:24:50.418300 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-03 00:24:50.418315 | orchestrator | Saturday 03 January 2026 00:23:01 +0000 (0:00:01.068) 0:00:03.876 ****** 2026-01-03 00:24:50.418330 | orchestrator | ok: [testbed-manager] 2026-01-03 00:24:50.418344 | orchestrator | 2026-01-03 00:24:50.418358 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-03 00:24:50.418372 | orchestrator | Saturday 03 January 2026 00:23:02 +0000 (0:00:00.342) 0:00:04.219 ****** 2026-01-03 00:24:50.418385 | orchestrator | changed: [testbed-manager] 2026-01-03 00:24:50.418397 | orchestrator | 2026-01-03 00:24:50.418411 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-03 00:24:50.418424 | orchestrator | Saturday 03 January 2026 00:23:03 +0000 (0:00:00.878) 0:00:05.098 ****** 2026-01-03 00:24:50.418437 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-03 00:24:50.418450 | orchestrator | ok: [testbed-manager] 2026-01-03 00:24:50.418464 | orchestrator | 2026-01-03 00:24:50.418477 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-03 00:24:50.418489 | orchestrator | Saturday 03 January 2026 00:23:37 +0000 (0:00:34.205) 0:00:39.303 ****** 2026-01-03 00:24:50.418502 | orchestrator | changed: [testbed-manager] 2026-01-03 00:24:50.418516 | orchestrator | 2026-01-03 00:24:50.418528 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-03 00:24:50.418541 | orchestrator | Saturday 03 January 2026 00:23:49 +0000 (0:00:12.018) 0:00:51.321 ****** 2026-01-03 00:24:50.418555 | orchestrator | Pausing for 60 seconds 2026-01-03 00:24:50.418567 | orchestrator | changed: [testbed-manager] 2026-01-03 00:24:50.418580 | orchestrator | 2026-01-03 00:24:50.418594 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-03 00:24:50.418606 | orchestrator | Saturday 03 January 2026 00:24:49 +0000 (0:01:00.078) 0:01:51.400 ****** 2026-01-03 00:24:50.418619 | orchestrator | ok: [testbed-manager] 2026-01-03 00:24:50.418632 | orchestrator | 2026-01-03 00:24:50.418644 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-03 00:24:50.418656 | orchestrator | Saturday 03 January 2026 00:24:49 +0000 (0:00:00.070) 0:01:51.470 ****** 2026-01-03 00:24:50.418668 | orchestrator | changed: [testbed-manager] 2026-01-03 00:24:50.418680 | orchestrator | 2026-01-03 00:24:50.418692 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:24:50.418720 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:24:50.418733 | orchestrator | 2026-01-03 00:24:50.418745 | orchestrator | 2026-01-03 00:24:50.418757 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-03 00:24:50.418770 | orchestrator | Saturday 03 January 2026 00:24:50 +0000 (0:00:00.605) 0:01:52.075 ****** 2026-01-03 00:24:50.418783 | orchestrator | =============================================================================== 2026-01-03 00:24:50.418796 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-01-03 00:24:50.418810 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.21s 2026-01-03 00:24:50.418824 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.02s 2026-01-03 00:24:50.418837 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.42s 2026-01-03 00:24:50.418849 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.14s 2026-01-03 00:24:50.418861 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s 2026-01-03 00:24:50.418873 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s 2026-01-03 00:24:50.418887 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2026-01-03 00:24:50.418900 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2026-01-03 00:24:50.418913 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-01-03 00:24:50.418927 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-01-03 00:24:50.694609 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-03 00:24:50.694680 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-03 00:24:50.700791 | orchestrator | + set -e 2026-01-03 00:24:50.700858 | orchestrator | + NAMESPACE=kolla 2026-01-03 
00:24:50.700871 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-03 00:24:50.707412 | orchestrator | ++ semver latest 9.0.0 2026-01-03 00:24:50.770325 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-03 00:24:50.770419 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-03 00:24:50.770434 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-03 00:25:02.865260 | orchestrator | 2026-01-03 00:25:02 | INFO  | Task a5253317-0bd6-4bc1-a01f-07556ce1369e (operator) was prepared for execution. 2026-01-03 00:25:02.865363 | orchestrator | 2026-01-03 00:25:02 | INFO  | It takes a moment until task a5253317-0bd6-4bc1-a01f-07556ce1369e (operator) has been started and output is visible here. 2026-01-03 00:25:19.089314 | orchestrator | 2026-01-03 00:25:19.089470 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-03 00:25:19.089499 | orchestrator | 2026-01-03 00:25:19.089520 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-03 00:25:19.089538 | orchestrator | Saturday 03 January 2026 00:25:07 +0000 (0:00:00.136) 0:00:00.136 ****** 2026-01-03 00:25:19.089549 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:25:19.089562 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:25:19.089577 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:25:19.089596 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:25:19.089614 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:25:19.089633 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:25:19.089651 | orchestrator | 2026-01-03 00:25:19.089668 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-03 00:25:19.089684 | orchestrator | Saturday 03 January 2026 00:25:10 +0000 (0:00:03.482) 0:00:03.619 ****** 2026-01-03 00:25:19.089697 | orchestrator | ok: [testbed-node-0] 
2026-01-03 00:25:19.089717 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:25:19.089737 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:25:19.089755 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:25:19.089770 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:25:19.089810 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:25:19.089824 | orchestrator | 2026-01-03 00:25:19.089838 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-01-03 00:25:19.089851 | orchestrator | 2026-01-03 00:25:19.089865 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-03 00:25:19.089879 | orchestrator | Saturday 03 January 2026 00:25:11 +0000 (0:00:00.742) 0:00:04.362 ****** 2026-01-03 00:25:19.089898 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:25:19.089917 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:25:19.089935 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:25:19.089955 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:25:19.089972 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:25:19.089991 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:25:19.090147 | orchestrator | 2026-01-03 00:25:19.090166 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-03 00:25:19.090176 | orchestrator | Saturday 03 January 2026 00:25:11 +0000 (0:00:00.150) 0:00:04.512 ****** 2026-01-03 00:25:19.090187 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:25:19.090198 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:25:19.090208 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:25:19.090219 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:25:19.090229 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:25:19.090239 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:25:19.090250 | orchestrator | 2026-01-03 00:25:19.090279 | orchestrator | TASK [osism.commons.operator : Create operator group] 
**************************
2026-01-03 00:25:19.090291 | orchestrator | Saturday 03 January 2026 00:25:11 +0000 (0:00:00.175) 0:00:04.688 ******
2026-01-03 00:25:19.090302 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:25:19.090318 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:25:19.090329 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:25:19.090340 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:25:19.090351 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:25:19.090361 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:25:19.090372 | orchestrator |
2026-01-03 00:25:19.090383 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-03 00:25:19.090394 | orchestrator | Saturday 03 January 2026 00:25:12 +0000 (0:00:00.677) 0:00:05.365 ******
2026-01-03 00:25:19.090404 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:25:19.090415 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:25:19.090426 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:25:19.090436 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:25:19.090447 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:25:19.090457 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:25:19.090468 | orchestrator |
2026-01-03 00:25:19.090486 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-03 00:25:19.090505 | orchestrator | Saturday 03 January 2026 00:25:13 +0000 (0:00:00.810) 0:00:06.176 ******
2026-01-03 00:25:19.090525 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-03 00:25:19.090544 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-03 00:25:19.090562 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-03 00:25:19.090579 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-03 00:25:19.090590 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-03 00:25:19.090608 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-03 00:25:19.090626 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-03 00:25:19.090643 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-03 00:25:19.090662 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-03 00:25:19.090680 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-03 00:25:19.090692 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-03 00:25:19.090702 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-03 00:25:19.090719 | orchestrator |
2026-01-03 00:25:19.090738 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-03 00:25:19.090757 | orchestrator | Saturday 03 January 2026 00:25:14 +0000 (0:00:01.272) 0:00:07.448 ******
2026-01-03 00:25:19.090791 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:25:19.090810 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:25:19.090828 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:25:19.090847 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:25:19.090866 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:25:19.090884 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:25:19.090903 | orchestrator |
2026-01-03 00:25:19.090922 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-03 00:25:19.090941 | orchestrator | Saturday 03 January 2026 00:25:15 +0000 (0:00:01.211) 0:00:08.659 ******
2026-01-03 00:25:19.090961 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-03 00:25:19.090981 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-03 00:25:19.090999 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-03 00:25:19.091057 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-03 00:25:19.091115 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-03 00:25:19.091138 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-03 00:25:19.091157 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-03 00:25:19.091177 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-03 00:25:19.091195 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-03 00:25:19.091214 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-03 00:25:19.091233 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-03 00:25:19.091246 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-03 00:25:19.091257 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-03 00:25:19.091267 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-03 00:25:19.091277 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-03 00:25:19.091288 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-03 00:25:19.091298 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-03 00:25:19.091309 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-03 00:25:19.091320 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-03 00:25:19.091330 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-03 00:25:19.091341 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-03 00:25:19.091351 |
orchestrator |
2026-01-03 00:25:19.091362 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-03 00:25:19.091373 | orchestrator | Saturday 03 January 2026 00:25:16 +0000 (0:00:01.433) 0:00:10.092 ******
2026-01-03 00:25:19.091384 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:25:19.091402 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:25:19.091421 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:25:19.091434 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:25:19.091445 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:25:19.091459 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:25:19.091477 | orchestrator |
2026-01-03 00:25:19.091497 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-03 00:25:19.091516 | orchestrator | Saturday 03 January 2026 00:25:17 +0000 (0:00:00.183) 0:00:10.275 ******
2026-01-03 00:25:19.091535 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:25:19.091554 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:25:19.091573 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:25:19.091590 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:25:19.091607 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:25:19.091633 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:25:19.091652 | orchestrator |
2026-01-03 00:25:19.091672 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-03 00:25:19.091691 | orchestrator | Saturday 03 January 2026 00:25:17 +0000 (0:00:00.195) 0:00:10.471 ******
2026-01-03 00:25:19.091707 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:25:19.091718 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:25:19.091729 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:25:19.091739 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:25:19.091750 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:25:19.091760 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:25:19.091770 | orchestrator |
2026-01-03 00:25:19.091781 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-03 00:25:19.091792 | orchestrator | Saturday 03 January 2026 00:25:17 +0000 (0:00:00.582) 0:00:11.054 ******
2026-01-03 00:25:19.091802 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:25:19.091813 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:25:19.091823 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:25:19.091834 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:25:19.091844 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:25:19.091855 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:25:19.091865 | orchestrator |
2026-01-03 00:25:19.091876 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-03 00:25:19.091887 | orchestrator | Saturday 03 January 2026 00:25:18 +0000 (0:00:00.158) 0:00:11.212 ******
2026-01-03 00:25:19.091897 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-03 00:25:19.091908 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-03 00:25:19.091919 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:25:19.091930 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:25:19.091941 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-03 00:25:19.091951 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:25:19.091962 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-03 00:25:19.091978 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-03 00:25:19.091996 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:25:19.092084 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:25:19.092096 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-03 00:25:19.092107 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:25:19.092118 | orchestrator |
2026-01-03 00:25:19.092129 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-03 00:25:19.092140 | orchestrator | Saturday 03 January 2026 00:25:18 +0000 (0:00:00.687) 0:00:11.900 ******
2026-01-03 00:25:19.092150 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:25:19.092161 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:25:19.092172 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:25:19.092182 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:25:19.092193 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:25:19.092203 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:25:19.092214 | orchestrator |
2026-01-03 00:25:19.092225 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-03 00:25:19.092235 | orchestrator | Saturday 03 January 2026 00:25:18 +0000 (0:00:00.162) 0:00:12.049 ******
2026-01-03 00:25:19.092246 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:25:19.092257 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:25:19.092268 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:25:19.092278 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:25:19.092307 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:25:20.376800 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:25:20.376901 | orchestrator |
2026-01-03 00:25:20.376912 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-03 00:25:20.376921 | orchestrator | Saturday 03 January 2026 00:25:19 +0000 (0:00:00.162) 0:00:12.211 ******
2026-01-03 00:25:20.376928 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:25:20.376968 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:25:20.376976 | orchestrator | skipping: [testbed-node-2]
2026-01-03
00:25:20.376983 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:25:20.376990 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:25:20.376997 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:25:20.377024 | orchestrator |
2026-01-03 00:25:20.377032 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-03 00:25:20.377038 | orchestrator | Saturday 03 January 2026 00:25:19 +0000 (0:00:00.162) 0:00:12.374 ******
2026-01-03 00:25:20.377044 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:25:20.377051 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:25:20.377057 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:25:20.377063 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:25:20.377070 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:25:20.377076 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:25:20.377083 | orchestrator |
2026-01-03 00:25:20.377088 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-03 00:25:20.377094 | orchestrator | Saturday 03 January 2026 00:25:19 +0000 (0:00:00.646) 0:00:13.021 ******
2026-01-03 00:25:20.377100 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:25:20.377106 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:25:20.377112 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:25:20.377118 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:25:20.377125 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:25:20.377131 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:25:20.377137 | orchestrator |
2026-01-03 00:25:20.377144 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:25:20.377152 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-03 00:25:20.377160 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-03 00:25:20.377183 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-03 00:25:20.377194 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-03 00:25:20.377201 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-03 00:25:20.377207 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-03 00:25:20.377214 | orchestrator |
2026-01-03 00:25:20.377220 | orchestrator |
2026-01-03 00:25:20.377227 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:25:20.377234 | orchestrator | Saturday 03 January 2026 00:25:20 +0000 (0:00:00.242) 0:00:13.263 ******
2026-01-03 00:25:20.377240 | orchestrator | ===============================================================================
2026-01-03 00:25:20.377247 | orchestrator | Gathering Facts --------------------------------------------------------- 3.48s
2026-01-03 00:25:20.377254 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.43s
2026-01-03 00:25:20.377262 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.27s
2026-01-03 00:25:20.377268 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.21s
2026-01-03 00:25:20.377275 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2026-01-03 00:25:20.377281 | orchestrator | Do not require tty for all users ---------------------------------------- 0.74s
2026-01-03 00:25:20.377287 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s
2026-01-03 00:25:20.377302 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.68s
2026-01-03 00:25:20.377309 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s
2026-01-03 00:25:20.377315 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s
2026-01-03 00:25:20.377322 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s
2026-01-03 00:25:20.377328 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s
2026-01-03 00:25:20.377335 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s
2026-01-03 00:25:20.377341 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2026-01-03 00:25:20.377347 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2026-01-03 00:25:20.377353 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2026-01-03 00:25:20.377359 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2026-01-03 00:25:20.377365 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s
2026-01-03 00:25:20.377372 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-01-03 00:25:20.677815 | orchestrator | + osism apply --environment custom facts
2026-01-03 00:25:22.604896 | orchestrator | 2026-01-03 00:25:22 | INFO  | Trying to run play facts in environment custom
2026-01-03 00:25:32.815963 | orchestrator | 2026-01-03 00:25:32 | INFO  | Task a249fd2d-4699-4705-af43-37ffd914ad5c (facts) was prepared for execution.
2026-01-03 00:25:32.816109 | orchestrator | 2026-01-03 00:25:32 | INFO  | It takes a moment until task a249fd2d-4699-4705-af43-37ffd914ad5c (facts) has been started and output is visible here.
2026-01-03 00:26:18.196202 | orchestrator |
2026-01-03 00:26:18.196305 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-03 00:26:18.196320 | orchestrator |
2026-01-03 00:26:18.196331 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-03 00:26:18.196342 | orchestrator | Saturday 03 January 2026 00:25:36 +0000 (0:00:00.081) 0:00:00.081 ******
2026-01-03 00:26:18.196352 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:18.196363 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:26:18.196373 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:26:18.196383 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:26:18.196393 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:26:18.196402 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:26:18.196412 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:26:18.196422 | orchestrator |
2026-01-03 00:26:18.196432 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-03 00:26:18.196442 | orchestrator | Saturday 03 January 2026 00:25:38 +0000 (0:00:01.397) 0:00:01.479 ******
2026-01-03 00:26:18.196451 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:18.196461 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:26:18.196471 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:26:18.196481 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:26:18.196490 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:26:18.196500 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:26:18.196510 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:26:18.196520 | orchestrator |
2026-01-03 00:26:18.196530 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-03 00:26:18.196540 | orchestrator |
2026-01-03 00:26:18.196550 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-03 00:26:18.196560 | orchestrator | Saturday 03 January 2026 00:25:39 +0000 (0:00:01.271) 0:00:02.751 ******
2026-01-03 00:26:18.196569 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:18.196579 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:18.196589 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:18.196599 | orchestrator |
2026-01-03 00:26:18.196627 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-03 00:26:18.196646 | orchestrator | Saturday 03 January 2026 00:25:39 +0000 (0:00:00.118) 0:00:02.870 ******
2026-01-03 00:26:18.196656 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:18.196666 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:18.196676 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:18.196688 | orchestrator |
2026-01-03 00:26:18.196700 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-03 00:26:18.196711 | orchestrator | Saturday 03 January 2026 00:25:39 +0000 (0:00:00.193) 0:00:03.063 ******
2026-01-03 00:26:18.196723 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:18.196734 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:18.196746 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:18.196756 | orchestrator |
2026-01-03 00:26:18.196768 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-03 00:26:18.196780 | orchestrator | Saturday 03 January 2026 00:25:39 +0000 (0:00:00.214) 0:00:03.278 ******
2026-01-03 00:26:18.196792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:26:18.196804 | orchestrator |
2026-01-03 00:26:18.196816 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-03 00:26:18.196828 | orchestrator | Saturday 03 January 2026 00:25:40 +0000 (0:00:00.144) 0:00:03.423 ******
2026-01-03 00:26:18.196840 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:18.196852 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:18.196863 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:18.196874 | orchestrator |
2026-01-03 00:26:18.196885 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-03 00:26:18.196896 | orchestrator | Saturday 03 January 2026 00:25:40 +0000 (0:00:00.441) 0:00:03.864 ******
2026-01-03 00:26:18.196908 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:26:18.196919 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:26:18.196930 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:26:18.196942 | orchestrator |
2026-01-03 00:26:18.196954 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-03 00:26:18.196966 | orchestrator | Saturday 03 January 2026 00:25:40 +0000 (0:00:00.121) 0:00:03.986 ******
2026-01-03 00:26:18.196977 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:26:18.196988 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:26:18.196999 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:26:18.197010 | orchestrator |
2026-01-03 00:26:18.197021 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-03 00:26:18.197033 | orchestrator | Saturday 03 January 2026 00:25:41 +0000 (0:00:01.065) 0:00:05.052 ******
2026-01-03 00:26:18.197044 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:18.197055 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:18.197065 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:18.197100 | orchestrator |
2026-01-03 00:26:18.197112 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-03 00:26:18.197121 | orchestrator | Saturday 03 January 2026 00:25:42 +0000 (0:00:00.490) 0:00:05.542 ******
2026-01-03 00:26:18.197131 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:26:18.197141 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:26:18.197151 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:26:18.197160 | orchestrator |
2026-01-03 00:26:18.197170 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-03 00:26:18.197180 | orchestrator | Saturday 03 January 2026 00:25:43 +0000 (0:00:01.109) 0:00:06.651 ******
2026-01-03 00:26:18.197189 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:26:18.197199 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:26:18.197208 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:26:18.197218 | orchestrator |
2026-01-03 00:26:18.197233 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-03 00:26:18.197259 | orchestrator | Saturday 03 January 2026 00:26:00 +0000 (0:00:16.725) 0:00:23.377 ******
2026-01-03 00:26:18.197275 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:26:18.197291 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:26:18.197306 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:26:18.197322 | orchestrator |
2026-01-03 00:26:18.197337 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-03 00:26:18.197372 | orchestrator | Saturday 03 January 2026 00:26:00 +0000 (0:00:00.080) 0:00:23.458 ******
2026-01-03 00:26:18.197391 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:26:18.197406 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:26:18.197422 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:26:18.197434 | orchestrator |
2026-01-03 00:26:18.197443 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-03
00:26:18.197453 | orchestrator | Saturday 03 January 2026 00:26:08 +0000 (0:00:08.624) 0:00:32.083 ******
2026-01-03 00:26:18.197462 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:18.197472 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:18.197481 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:18.197491 | orchestrator |
2026-01-03 00:26:18.197500 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-03 00:26:18.197510 | orchestrator | Saturday 03 January 2026 00:26:09 +0000 (0:00:00.435) 0:00:32.518 ******
2026-01-03 00:26:18.197520 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-03 00:26:18.197530 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-03 00:26:18.197539 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-03 00:26:18.197549 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-03 00:26:18.197558 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-03 00:26:18.197568 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-03 00:26:18.197577 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-03 00:26:18.197587 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-03 00:26:18.197596 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-03 00:26:18.197606 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-03 00:26:18.197615 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-03 00:26:18.197625 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-03 00:26:18.197634 | orchestrator |
2026-01-03 00:26:18.197643 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-03 00:26:18.197653 | orchestrator | Saturday 03 January 2026 00:26:12 +0000 (0:00:03.763) 0:00:36.282 ******
2026-01-03 00:26:18.197662 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:18.197672 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:18.197681 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:18.197691 | orchestrator |
2026-01-03 00:26:18.197700 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-03 00:26:18.197710 | orchestrator |
2026-01-03 00:26:18.197719 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-03 00:26:18.197729 | orchestrator | Saturday 03 January 2026 00:26:14 +0000 (0:00:01.420) 0:00:37.703 ******
2026-01-03 00:26:18.197738 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:26:18.197748 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:26:18.197757 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:26:18.197767 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:18.197776 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:18.197786 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:18.197795 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:18.197804 | orchestrator |
2026-01-03 00:26:18.197814 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:26:18.197853 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:26:18.197873 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:26:18.197885 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:26:18.197895 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:26:18.197904 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:26:18.197914 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:26:18.197924 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:26:18.197933 | orchestrator |
2026-01-03 00:26:18.197943 | orchestrator |
2026-01-03 00:26:18.197953 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:26:18.197962 | orchestrator | Saturday 03 January 2026 00:26:18 +0000 (0:00:03.764) 0:00:41.467 ******
2026-01-03 00:26:18.197972 | orchestrator | ===============================================================================
2026-01-03 00:26:18.197981 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.73s
2026-01-03 00:26:18.197991 | orchestrator | Install required packages (Debian) -------------------------------------- 8.63s
2026-01-03 00:26:18.198000 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.76s
2026-01-03 00:26:18.198010 | orchestrator | Copy fact files --------------------------------------------------------- 3.76s
2026-01-03 00:26:18.198136 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.42s
2026-01-03 00:26:18.198156 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2026-01-03 00:26:18.198185 | orchestrator | Copy fact file ---------------------------------------------------------- 1.27s
2026-01-03 00:26:18.400514 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.11s
2026-01-03 00:26:18.400643 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2026-01-03 00:26:18.400665 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.49s
2026-01-03 00:26:18.400684 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2026-01-03 00:26:18.400703 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2026-01-03 00:26:18.400722 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2026-01-03 00:26:18.400739 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2026-01-03 00:26:18.400756 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-01-03 00:26:18.400774 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2026-01-03 00:26:18.400791 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-01-03 00:26:18.400808 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2026-01-03 00:26:18.660723 | orchestrator | + osism apply bootstrap
2026-01-03 00:26:30.710476 | orchestrator | 2026-01-03 00:26:30 | INFO  | Task 99f22d1a-e361-45ed-934d-b69b42448166 (bootstrap) was prepared for execution.
2026-01-03 00:26:30.710583 | orchestrator | 2026-01-03 00:26:30 | INFO  | It takes a moment until task 99f22d1a-e361-45ed-934d-b69b42448166 (bootstrap) has been started and output is visible here.
2026-01-03 00:26:46.676548 | orchestrator | 2026-01-03 00:26:46.676673 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-01-03 00:26:46.676717 | orchestrator | 2026-01-03 00:26:46.676729 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-01-03 00:26:46.676741 | orchestrator | Saturday 03 January 2026 00:26:34 +0000 (0:00:00.150) 0:00:00.150 ****** 2026-01-03 00:26:46.676752 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:46.676765 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:46.676776 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:46.676786 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:46.676797 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:26:46.676808 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:46.676819 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:46.676829 | orchestrator | 2026-01-03 00:26:46.676841 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-03 00:26:46.676852 | orchestrator | 2026-01-03 00:26:46.676863 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-03 00:26:46.676874 | orchestrator | Saturday 03 January 2026 00:26:35 +0000 (0:00:00.236) 0:00:00.387 ****** 2026-01-03 00:26:46.676884 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:26:46.676895 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:26:46.676906 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:26:46.676917 | orchestrator | ok: [testbed-manager] 2026-01-03 00:26:46.676928 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:26:46.676939 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:26:46.676950 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:26:46.676961 | orchestrator | 2026-01-03 00:26:46.676972 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-01-03 00:26:46.676983 | orchestrator |
2026-01-03 00:26:46.676994 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-03 00:26:46.677005 | orchestrator | Saturday 03 January 2026 00:26:39 +0000 (0:00:03.858) 0:00:04.245 ******
2026-01-03 00:26:46.677016 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-03 00:26:46.677028 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-03 00:26:46.677039 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-03 00:26:46.677050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-03 00:26:46.677061 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-03 00:26:46.677071 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-03 00:26:46.677082 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-03 00:26:46.677093 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-03 00:26:46.677157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:26:46.677170 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-03 00:26:46.677181 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-03 00:26:46.677192 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-03 00:26:46.677203 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-03 00:26:46.677214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-03 00:26:46.677225 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-03 00:26:46.677236 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:26:46.677247 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-03 00:26:46.677257 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-03 00:26:46.677268 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-03 00:26:46.677279 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:26:46.677290 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-03 00:26:46.677301 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-03 00:26:46.677312 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-03 00:26:46.677323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-03 00:26:46.677341 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-03 00:26:46.677352 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-03 00:26:46.677363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-03 00:26:46.677374 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-03 00:26:46.677384 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-03 00:26:46.677395 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-03 00:26:46.677406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-03 00:26:46.677417 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-03 00:26:46.677427 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:26:46.677438 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-03 00:26:46.677449 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-03 00:26:46.677459 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-03 00:26:46.677470 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-03 00:26:46.677481 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-03 00:26:46.677491 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-03 00:26:46.677502 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-03 00:26:46.677513 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-03 00:26:46.677524 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:26:46.677535 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-03 00:26:46.677545 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-03 00:26:46.677556 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-03 00:26:46.677567 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-03 00:26:46.677596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-03 00:26:46.677608 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-03 00:26:46.677619 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-03 00:26:46.677629 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-03 00:26:46.677640 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:26:46.677651 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-03 00:26:46.677662 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-03 00:26:46.677673 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:26:46.677684 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-03 00:26:46.677694 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:26:46.677705 | orchestrator |
2026-01-03 00:26:46.677716 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-03 00:26:46.677727 | orchestrator |
2026-01-03 00:26:46.677738 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-03 00:26:46.677748 | orchestrator | Saturday 03 January 2026 00:26:39 +0000 (0:00:00.424) 0:00:04.670 ******
2026-01-03 00:26:46.677759 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:46.677770 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:46.677781 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:26:46.677791 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:26:46.677802 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:26:46.677813 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:46.677823 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:46.677834 | orchestrator |
2026-01-03 00:26:46.677845 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-01-03 00:26:46.677856 | orchestrator | Saturday 03 January 2026 00:26:40 +0000 (0:00:01.211) 0:00:05.881 ******
2026-01-03 00:26:46.677867 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:46.677878 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:26:46.677896 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:46.677907 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:26:46.677917 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:26:46.677928 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:46.677939 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:46.677949 | orchestrator |
2026-01-03 00:26:46.677960 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-01-03 00:26:46.677971 | orchestrator | Saturday 03 January 2026 00:26:41 +0000 (0:00:01.192) 0:00:07.074 ******
2026-01-03 00:26:46.677983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:26:46.677996 | orchestrator |
2026-01-03 00:26:46.678007 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-01-03 00:26:46.678075 | orchestrator | Saturday 03 January 2026 00:26:42 +0000 (0:00:00.273) 0:00:07.348 ******
2026-01-03 00:26:46.678089 | orchestrator | changed: [testbed-manager]
2026-01-03 00:26:46.678101 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:26:46.678150 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:26:46.678161 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:26:46.678172 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:26:46.678183 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:26:46.678194 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:26:46.678205 | orchestrator |
2026-01-03 00:26:46.678216 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-01-03 00:26:46.678227 | orchestrator | Saturday 03 January 2026 00:26:44 +0000 (0:00:02.091) 0:00:09.439 ******
2026-01-03 00:26:46.678261 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:26:46.678275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:26:46.678287 | orchestrator |
2026-01-03 00:26:46.678299 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-01-03 00:26:46.678310 | orchestrator | Saturday 03 January 2026 00:26:44 +0000 (0:00:00.241) 0:00:09.681 ******
2026-01-03 00:26:46.678321 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:26:46.678332 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:26:46.678342 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:26:46.678353 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:26:46.678364 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:26:46.678375 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:26:46.678386 | orchestrator |
2026-01-03 00:26:46.678396 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-01-03 00:26:46.678407 | orchestrator | Saturday 03 January 2026 00:26:45 +0000 (0:00:01.073) 0:00:10.754 ******
2026-01-03 00:26:46.678418 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:26:46.678429 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:26:46.678440 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:26:46.678461 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:26:46.678473 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:26:46.678485 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:26:46.678504 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:26:46.678522 | orchestrator |
2026-01-03 00:26:46.678542 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-01-03 00:26:46.678559 | orchestrator | Saturday 03 January 2026 00:26:46 +0000 (0:00:00.586) 0:00:11.340 ******
2026-01-03 00:26:46.678576 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:26:46.678593 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:26:46.678609 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:26:46.678626 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:26:46.678643 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:26:46.678662 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:26:46.678698 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:46.678717 | orchestrator |
2026-01-03 00:26:46.678732 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-03 00:26:46.678744 | orchestrator | Saturday 03 January 2026 00:26:46 +0000 (0:00:00.219) 0:00:11.744 ******
2026-01-03 00:26:46.678755 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:26:46.678766 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:26:46.678789 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:26:59.926336 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:26:59.926478 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:26:59.926505 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:26:59.926525 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:26:59.926544 | orchestrator |
2026-01-03 00:26:59.926566 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-03 00:26:59.926587 | orchestrator | Saturday 03 January 2026 00:26:46 +0000 (0:00:00.219) 0:00:11.963 ******
2026-01-03 00:26:59.926608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:26:59.926642 | orchestrator |
2026-01-03 00:26:59.926656 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-03 00:26:59.926667 | orchestrator | Saturday 03 January 2026 00:26:47 +0000 (0:00:00.281) 0:00:12.245 ******
2026-01-03 00:26:59.926678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:26:59.926689 | orchestrator |
2026-01-03 00:26:59.926700 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-03 00:26:59.926711 | orchestrator | Saturday 03 January 2026 00:26:47 +0000 (0:00:00.305) 0:00:12.550 ******
2026-01-03 00:26:59.926722 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:59.926734 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:26:59.926744 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:59.926755 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:26:59.926768 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:26:59.926781 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:59.926797 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:59.926815 | orchestrator |
2026-01-03 00:26:59.926834 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-03 00:26:59.926853 | orchestrator | Saturday 03 January 2026 00:26:48 +0000 (0:00:01.531) 0:00:14.082 ******
2026-01-03 00:26:59.926871 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:26:59.926890 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:26:59.926911 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:26:59.926926 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:26:59.926940 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:26:59.926953 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:26:59.926966 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:26:59.926979 | orchestrator |
2026-01-03 00:26:59.926992 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-03 00:26:59.927005 | orchestrator | Saturday 03 January 2026 00:26:49 +0000 (0:00:00.228) 0:00:14.310 ******
2026-01-03 00:26:59.927024 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:59.927042 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:59.927060 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:59.927080 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:59.927099 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:26:59.927145 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:26:59.927162 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:26:59.927174 | orchestrator |
2026-01-03 00:26:59.927185 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-03 00:26:59.927196 | orchestrator | Saturday 03 January 2026 00:26:49 +0000 (0:00:00.681) 0:00:14.992 ******
2026-01-03 00:26:59.927236 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:26:59.927247 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:26:59.927257 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:26:59.927268 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:26:59.927279 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:26:59.927289 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:26:59.927300 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:26:59.927310 | orchestrator |
2026-01-03 00:26:59.927321 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-03 00:26:59.927333 | orchestrator | Saturday 03 January 2026 00:26:50 +0000 (0:00:00.313) 0:00:15.305 ******
2026-01-03 00:26:59.927344 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:59.927355 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:26:59.927365 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:26:59.927376 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:26:59.927386 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:26:59.927397 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:26:59.927407 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:26:59.927417 | orchestrator |
2026-01-03 00:26:59.927435 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-03 00:26:59.927454 | orchestrator | Saturday 03 January 2026 00:26:50 +0000 (0:00:00.587) 0:00:15.893 ******
2026-01-03 00:26:59.927472 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:59.927490 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:26:59.927510 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:26:59.927527 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:26:59.927539 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:26:59.927549 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:26:59.927560 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:26:59.927570 | orchestrator |
2026-01-03 00:26:59.927581 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-03 00:26:59.927591 | orchestrator | Saturday 03 January 2026 00:26:51 +0000 (0:00:01.181) 0:00:17.074 ******
2026-01-03 00:26:59.927602 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:59.927613 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:59.927623 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:26:59.927634 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:26:59.927645 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:26:59.927655 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:59.927666 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:59.927676 | orchestrator |
2026-01-03 00:26:59.927687 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-03 00:26:59.927698 | orchestrator | Saturday 03 January 2026 00:26:53 +0000 (0:00:01.950) 0:00:19.025 ******
2026-01-03 00:26:59.927739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:26:59.927753 | orchestrator |
2026-01-03 00:26:59.927772 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-03 00:26:59.927790 | orchestrator | Saturday 03 January 2026 00:26:54 +0000 (0:00:00.317) 0:00:19.342 ******
2026-01-03 00:26:59.927808 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:26:59.927828 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:26:59.927845 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:26:59.927856 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:26:59.927866 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:26:59.927877 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:26:59.927888 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:26:59.927898 | orchestrator |
2026-01-03 00:26:59.927909 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-03 00:26:59.927920 | orchestrator | Saturday 03 January 2026 00:26:55 +0000 (0:00:01.273) 0:00:20.616 ******
2026-01-03 00:26:59.927942 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:59.927953 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:59.927963 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:59.927974 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:59.927984 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:26:59.927996 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:26:59.928015 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:26:59.928033 | orchestrator |
2026-01-03 00:26:59.928051 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-03 00:26:59.928069 | orchestrator | Saturday 03 January 2026 00:26:55 +0000 (0:00:00.207) 0:00:20.824 ******
2026-01-03 00:26:59.928088 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:59.928105 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:59.928116 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:59.928157 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:59.928168 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:26:59.928178 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:26:59.928189 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:26:59.928199 | orchestrator |
2026-01-03 00:26:59.928210 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-03 00:26:59.928221 | orchestrator | Saturday 03 January 2026 00:26:55 +0000 (0:00:00.205) 0:00:21.030 ******
2026-01-03 00:26:59.928232 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:59.928242 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:59.928253 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:59.928263 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:59.928274 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:26:59.928284 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:26:59.928294 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:26:59.928305 | orchestrator |
2026-01-03 00:26:59.928316 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-03 00:26:59.928327 | orchestrator | Saturday 03 January 2026 00:26:56 +0000 (0:00:00.230) 0:00:21.260 ******
2026-01-03 00:26:59.928338 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:26:59.928351 | orchestrator |
2026-01-03 00:26:59.928362 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-03 00:26:59.928372 | orchestrator | Saturday 03 January 2026 00:26:56 +0000 (0:00:00.264) 0:00:21.525 ******
2026-01-03 00:26:59.928383 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:59.928394 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:59.928404 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:59.928415 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:59.928425 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:26:59.928436 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:26:59.928446 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:26:59.928457 | orchestrator |
2026-01-03 00:26:59.928467 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-03 00:26:59.928478 | orchestrator | Saturday 03 January 2026 00:26:56 +0000 (0:00:00.550) 0:00:22.075 ******
2026-01-03 00:26:59.928489 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:26:59.928499 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:26:59.928510 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:26:59.928521 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:26:59.928531 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:26:59.928542 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:26:59.928552 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:26:59.928563 | orchestrator |
2026-01-03 00:26:59.928574 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-03 00:26:59.928584 | orchestrator | Saturday 03 January 2026 00:26:57 +0000 (0:00:00.214) 0:00:22.290 ******
2026-01-03 00:26:59.928595 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:59.928616 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:59.928626 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:59.928637 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:59.928648 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:26:59.928658 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:26:59.928669 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:26:59.928679 | orchestrator |
2026-01-03 00:26:59.928690 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-03 00:26:59.928700 | orchestrator | Saturday 03 January 2026 00:26:58 +0000 (0:00:01.115) 0:00:23.405 ******
2026-01-03 00:26:59.928711 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:59.928721 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:59.928732 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:59.928742 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:59.928753 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:26:59.928764 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:26:59.928774 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:26:59.928785 | orchestrator |
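(Editor's aside: the repository tasks above remove the legacy /etc/apt/sources.list and manage repositories under /etc/apt/sources.list.d/ instead. On Ubuntu 24.04 that file is in the deb822 format; the fragment below is a generic illustration of that format, not the actual file templated by the osism.commons.repository role.)

```
# /etc/apt/sources.list.d/ubuntu.sources -- illustrative deb822 layout
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
```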
2026-01-03 00:26:59.928796 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-03 00:26:59.928806 | orchestrator | Saturday 03 January 2026 00:26:58 +0000 (0:00:00.565) 0:00:23.970 ******
2026-01-03 00:26:59.928817 | orchestrator | ok: [testbed-manager]
2026-01-03 00:26:59.928828 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:26:59.928838 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:26:59.928849 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:26:59.928868 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:27:41.482409 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:27:41.482547 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:27:41.482566 | orchestrator |
2026-01-03 00:27:41.482580 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-03 00:27:41.482593 | orchestrator | Saturday 03 January 2026 00:26:59 +0000 (0:00:01.146) 0:00:25.117 ******
2026-01-03 00:27:41.482604 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:27:41.482616 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:27:41.482627 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:27:41.482638 | orchestrator | changed: [testbed-manager]
2026-01-03 00:27:41.482649 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:27:41.482659 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:27:41.482670 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:27:41.482681 | orchestrator |
2026-01-03 00:27:41.482693 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-01-03 00:27:41.482704 | orchestrator | Saturday 03 January 2026 00:27:17 +0000 (0:00:17.094) 0:00:42.212 ******
2026-01-03 00:27:41.482715 | orchestrator | ok: [testbed-manager]
2026-01-03 00:27:41.482726 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:27:41.482736 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:27:41.482747 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:27:41.482759 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:27:41.482770 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:27:41.482780 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:27:41.482791 | orchestrator |
2026-01-03 00:27:41.482802 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-01-03 00:27:41.482813 | orchestrator | Saturday 03 January 2026 00:27:17 +0000 (0:00:00.204) 0:00:42.417 ******
2026-01-03 00:27:41.482824 | orchestrator | ok: [testbed-manager]
2026-01-03 00:27:41.482834 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:27:41.482845 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:27:41.482856 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:27:41.482866 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:27:41.482877 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:27:41.482887 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:27:41.482898 | orchestrator |
2026-01-03 00:27:41.482937 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-01-03 00:27:41.482951 | orchestrator | Saturday 03 January 2026 00:27:17 +0000 (0:00:00.206) 0:00:42.623 ******
2026-01-03 00:27:41.482964 | orchestrator | ok: [testbed-manager]
2026-01-03 00:27:41.483016 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:27:41.483038 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:27:41.483057 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:27:41.483076 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:27:41.483095 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:27:41.483114 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:27:41.483134 | orchestrator |
2026-01-03 00:27:41.483154 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-01-03 00:27:41.483203 | orchestrator | Saturday 03 January 2026 00:27:17 +0000 (0:00:00.214) 0:00:42.837 ******
2026-01-03 00:27:41.483218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:27:41.483234 | orchestrator |
2026-01-03 00:27:41.483248 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-01-03 00:27:41.483261 | orchestrator | Saturday 03 January 2026 00:27:17 +0000 (0:00:00.243) 0:00:43.081 ******
2026-01-03 00:27:41.483275 | orchestrator | ok: [testbed-manager]
2026-01-03 00:27:41.483286 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:27:41.483297 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:27:41.483307 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:27:41.483318 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:27:41.483329 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:27:41.483339 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:27:41.483350 | orchestrator |
2026-01-03 00:27:41.483360 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-01-03 00:27:41.483371 | orchestrator | Saturday 03 January 2026 00:27:19 +0000 (0:00:01.772) 0:00:44.853 ******
2026-01-03 00:27:41.483402 | orchestrator | changed: [testbed-manager]
2026-01-03 00:27:41.483413 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:27:41.483424 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:27:41.483435 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:27:41.483446 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:27:41.483456 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:27:41.483467 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:27:41.483480 | orchestrator |
2026-01-03 00:27:41.483499 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-01-03 00:27:41.483517 | orchestrator | Saturday 03 January 2026 00:27:20 +0000 (0:00:01.153) 0:00:46.006 ******
2026-01-03 00:27:41.483535 | orchestrator | ok: [testbed-manager]
2026-01-03 00:27:41.483555 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:27:41.483572 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:27:41.483592 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:27:41.483603 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:27:41.483614 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:27:41.483624 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:27:41.483639 | orchestrator |
2026-01-03 00:27:41.483657 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-01-03 00:27:41.483676 | orchestrator | Saturday 03 January 2026 00:27:21 +0000 (0:00:00.910) 0:00:46.917 ******
2026-01-03 00:27:41.483696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:27:41.483717 | orchestrator |
2026-01-03 00:27:41.483735 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-01-03 00:27:41.483755 | orchestrator | Saturday 03 January 2026 00:27:21 +0000 (0:00:00.269) 0:00:47.186 ******
2026-01-03 00:27:41.483766 | orchestrator | changed: [testbed-manager]
2026-01-03 00:27:41.483777 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:27:41.483788 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:27:41.483798 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:27:41.483809 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:27:41.483831 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:27:41.483842 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:27:41.483852 | orchestrator |
2026-01-03 00:27:41.483892 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-01-03 00:27:41.483904 | orchestrator | Saturday 03 January 2026 00:27:22 +0000 (0:00:01.012) 0:00:48.199 ******
2026-01-03 00:27:41.483915 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:27:41.483926 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:27:41.483937 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:27:41.483947 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:27:41.483958 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:27:41.483968 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:27:41.483979 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:27:41.483990 | orchestrator |
2026-01-03 00:27:41.484000 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-03 00:27:41.484011 | orchestrator | Saturday 03 January 2026 00:27:23 +0000 (0:00:00.210) 0:00:48.409 ******
2026-01-03 00:27:41.484023 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:27:41.484034 | orchestrator |
2026-01-03 00:27:41.484045 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-03 00:27:41.484055 | orchestrator | Saturday 03 January 2026 00:27:23 +0000 (0:00:00.282) 0:00:48.691 ******
2026-01-03 00:27:41.484066 | orchestrator | ok: [testbed-manager]
2026-01-03 00:27:41.484077 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:27:41.484088 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:27:41.484098 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:27:41.484109 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:27:41.484119 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:27:41.484130 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:27:41.484140 | orchestrator |
2026-01-03 00:27:41.484151 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-03 00:27:41.484182 | orchestrator | Saturday 03 January 2026 00:27:25 +0000 (0:00:02.016) 0:00:50.708 ******
2026-01-03 00:27:41.484194 | orchestrator | changed: [testbed-manager]
2026-01-03 00:27:41.484205 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:27:41.484215 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:27:41.484226 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:27:41.484237 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:27:41.484247 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:27:41.484258 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:27:41.484269 | orchestrator |
2026-01-03 00:27:41.484279 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-03 00:27:41.484290 | orchestrator | Saturday 03 January 2026 00:27:26 +0000 (0:00:01.197) 0:00:51.905 ******
2026-01-03 00:27:41.484301 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:27:41.484312 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:27:41.484322 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:27:41.484333 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:27:41.484344 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:27:41.484354 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:27:41.484365 | orchestrator | changed: [testbed-manager]
2026-01-03 00:27:41.484376 | orchestrator |
2026-01-03 00:27:41.484387 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-03 00:27:41.484398 | orchestrator | Saturday 03 January 2026 00:27:38 +0000 (0:00:11.494) 0:01:03.400 ******
2026-01-03 00:27:41.484408 | orchestrator | ok: [testbed-manager]
2026-01-03 00:27:41.484419 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:27:41.484430 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:27:41.484440 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:27:41.484451 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:27:41.484462 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:27:41.484472 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:27:41.484491 | orchestrator |
2026-01-03 00:27:41.484502 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-01-03 00:27:41.484512 | orchestrator | Saturday 03 January 2026 00:27:39 +0000 (0:00:01.470) 0:01:04.870 ******
2026-01-03 00:27:41.484523 | orchestrator | ok: [testbed-manager]
2026-01-03 00:27:41.484534 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:27:41.484544 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:27:41.484555 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:27:41.484565 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:27:41.484576 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:27:41.484586 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:27:41.484597 | orchestrator |
2026-01-03 00:27:41.484608 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-01-03 00:27:41.484619 | orchestrator | Saturday 03 January 2026 00:27:40 +0000 (0:00:01.074) 0:01:05.944 ******
2026-01-03 00:27:41.484629 | orchestrator | ok: [testbed-manager]
2026-01-03 00:27:41.484640 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:27:41.484651 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:27:41.484661 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:27:41.484672 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:27:41.484682 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:27:41.484693 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:27:41.484703 | orchestrator |
2026-01-03 00:27:41.484714 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-01-03 00:27:41.484725 | orchestrator | Saturday
03 January 2026 00:27:40 +0000 (0:00:00.236) 0:01:06.181 ****** 2026-01-03 00:27:41.484735 | orchestrator | ok: [testbed-manager] 2026-01-03 00:27:41.484746 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:27:41.484756 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:27:41.484767 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:27:41.484777 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:27:41.484788 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:27:41.484798 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:27:41.484809 | orchestrator | 2026-01-03 00:27:41.484819 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-01-03 00:27:41.484830 | orchestrator | Saturday 03 January 2026 00:27:41 +0000 (0:00:00.228) 0:01:06.410 ****** 2026-01-03 00:27:41.484842 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:27:41.484853 | orchestrator | 2026-01-03 00:27:41.484875 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-01-03 00:30:06.553866 | orchestrator | Saturday 03 January 2026 00:27:41 +0000 (0:00:00.264) 0:01:06.674 ****** 2026-01-03 00:30:06.553986 | orchestrator | ok: [testbed-manager] 2026-01-03 00:30:06.554005 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:30:06.554103 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:30:06.554129 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:30:06.554140 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:30:06.554151 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:30:06.554163 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:30:06.554174 | orchestrator | 2026-01-03 00:30:06.554186 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-01-03 00:30:06.554198 | orchestrator | Saturday 03 January 2026 00:27:43 +0000 (0:00:02.383) 0:01:09.058 ****** 2026-01-03 00:30:06.554209 | orchestrator | changed: [testbed-manager] 2026-01-03 00:30:06.554221 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:30:06.554232 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:30:06.554243 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:30:06.554254 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:30:06.554265 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:30:06.554303 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:30:06.554315 | orchestrator | 2026-01-03 00:30:06.554327 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-01-03 00:30:06.554376 | orchestrator | Saturday 03 January 2026 00:27:44 +0000 (0:00:00.681) 0:01:09.740 ****** 2026-01-03 00:30:06.554396 | orchestrator | ok: [testbed-manager] 2026-01-03 00:30:06.554415 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:30:06.554435 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:30:06.554454 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:30:06.554474 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:30:06.554493 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:30:06.554511 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:30:06.554524 | orchestrator | 2026-01-03 00:30:06.554537 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-01-03 00:30:06.554550 | orchestrator | Saturday 03 January 2026 00:27:44 +0000 (0:00:00.212) 0:01:09.953 ****** 2026-01-03 00:30:06.554563 | orchestrator | ok: [testbed-manager] 2026-01-03 00:30:06.554576 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:30:06.554589 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:30:06.554602 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:30:06.554615 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:30:06.554627 | 
orchestrator | ok: [testbed-node-1] 2026-01-03 00:30:06.554641 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:30:06.554654 | orchestrator | 2026-01-03 00:30:06.554667 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-01-03 00:30:06.554680 | orchestrator | Saturday 03 January 2026 00:27:45 +0000 (0:00:01.242) 0:01:11.195 ****** 2026-01-03 00:30:06.554694 | orchestrator | changed: [testbed-manager] 2026-01-03 00:30:06.554707 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:30:06.554720 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:30:06.554733 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:30:06.554744 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:30:06.554755 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:30:06.554766 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:30:06.554777 | orchestrator | 2026-01-03 00:30:06.554788 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-01-03 00:30:06.554799 | orchestrator | Saturday 03 January 2026 00:27:48 +0000 (0:00:02.464) 0:01:13.660 ****** 2026-01-03 00:30:06.554810 | orchestrator | ok: [testbed-manager] 2026-01-03 00:30:06.554821 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:30:06.554831 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:30:06.554842 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:30:06.554852 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:30:06.554863 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:30:06.554873 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:30:06.554884 | orchestrator | 2026-01-03 00:30:06.554895 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-01-03 00:30:06.554906 | orchestrator | Saturday 03 January 2026 00:27:51 +0000 (0:00:03.069) 0:01:16.729 ****** 2026-01-03 00:30:06.554917 | orchestrator | ok: [testbed-manager] 2026-01-03 00:30:06.554928 
| orchestrator | ok: [testbed-node-3] 2026-01-03 00:30:06.554938 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:30:06.554949 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:30:06.554959 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:30:06.554970 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:30:06.554980 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:30:06.554991 | orchestrator | 2026-01-03 00:30:06.555002 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-01-03 00:30:06.555012 | orchestrator | Saturday 03 January 2026 00:28:29 +0000 (0:00:38.029) 0:01:54.758 ****** 2026-01-03 00:30:06.555023 | orchestrator | changed: [testbed-manager] 2026-01-03 00:30:06.555034 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:30:06.555045 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:30:06.555056 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:30:06.555067 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:30:06.555077 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:30:06.555088 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:30:06.555099 | orchestrator | 2026-01-03 00:30:06.555120 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-01-03 00:30:06.555131 | orchestrator | Saturday 03 January 2026 00:29:50 +0000 (0:01:20.912) 0:03:15.671 ****** 2026-01-03 00:30:06.555142 | orchestrator | ok: [testbed-manager] 2026-01-03 00:30:06.555154 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:30:06.555172 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:30:06.555200 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:30:06.555219 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:30:06.555237 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:30:06.555254 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:30:06.555270 | orchestrator | 2026-01-03 00:30:06.555316 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-01-03 00:30:06.555332 | orchestrator | Saturday 03 January 2026 00:29:52 +0000 (0:00:02.053) 0:03:17.725 ****** 2026-01-03 00:30:06.555350 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:30:06.555368 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:30:06.555385 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:30:06.555402 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:30:06.555420 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:30:06.555438 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:30:06.555456 | orchestrator | changed: [testbed-manager] 2026-01-03 00:30:06.555474 | orchestrator | 2026-01-03 00:30:06.555492 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-01-03 00:30:06.555512 | orchestrator | Saturday 03 January 2026 00:30:05 +0000 (0:00:12.698) 0:03:30.423 ****** 2026-01-03 00:30:06.555583 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-01-03 00:30:06.555621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-01-03 00:30:06.555647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-01-03 00:30:06.555669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-03 00:30:06.555690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-03 00:30:06.555708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-01-03 00:30:06.555736 | orchestrator | 2026-01-03 00:30:06.555748 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-01-03 00:30:06.555759 | orchestrator | Saturday 03 January 2026 00:30:05 +0000 (0:00:00.378) 0:03:30.801 ****** 2026-01-03 00:30:06.555770 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-03 00:30:06.555786 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-03 00:30:06.555797 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:30:06.555808 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:30:06.555819 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-03 00:30:06.555830 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-03 00:30:06.555840 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:30:06.555851 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:30:06.555862 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-03 00:30:06.555873 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-03 00:30:06.555884 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-03 00:30:06.555895 | orchestrator | 2026-01-03 00:30:06.555906 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-01-03 00:30:06.555929 | orchestrator | Saturday 03 January 2026 00:30:06 +0000 (0:00:00.804) 0:03:31.605 ****** 2026-01-03 00:30:06.555940 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-03 00:30:06.555952 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-03 00:30:06.555963 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-03 00:30:06.555974 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-03 00:30:06.555985 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-03 00:30:06.556012 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-03 00:30:15.720855 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-03 00:30:15.720940 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-03 00:30:15.720953 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-03 00:30:15.720961 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-03 00:30:15.720967 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-03 00:30:15.720974 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-03 00:30:15.720980 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-03 00:30:15.720986 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-03 00:30:15.720992 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-03 00:30:15.721000 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-03 00:30:15.721007 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-03 00:30:15.721012 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-03 00:30:15.721018 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-03 00:30:15.721045 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-03 00:30:15.721051 | orchestrator | 
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-03 00:30:15.721056 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-03 00:30:15.721062 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-03 00:30:15.721067 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-03 00:30:15.721072 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-03 00:30:15.721078 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-03 00:30:15.721083 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-03 00:30:15.721089 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-03 00:30:15.721094 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-03 00:30:15.721099 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-03 00:30:15.721105 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:30:15.721112 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:30:15.721118 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-03 00:30:15.721123 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-03 00:30:15.721129 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-03 00:30:15.721134 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-03 00:30:15.721140 | orchestrator 
| skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-03 00:30:15.721146 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-03 00:30:15.721151 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-03 00:30:15.721157 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-03 00:30:15.721164 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-03 00:30:15.721170 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-03 00:30:15.721175 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:30:15.721181 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:30:15.721187 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-03 00:30:15.721193 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-03 00:30:15.721198 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-03 00:30:15.721204 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-03 00:30:15.721211 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-03 00:30:15.721247 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-03 00:30:15.721254 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-01-03 00:30:15.721262 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-03 00:30:15.721268 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-03 00:30:15.721326 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-01-03 00:30:15.721332 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-03 00:30:15.721335 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-03 00:30:15.721339 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-01-03 00:30:15.721343 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-03 00:30:15.721347 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-03 00:30:15.721350 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-01-03 00:30:15.721354 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-03 00:30:15.721358 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-01-03 00:30:15.721362 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-03 00:30:15.721366 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-01-03 00:30:15.721369 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-03 00:30:15.721373 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-01-03 00:30:15.721377 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-03 00:30:15.721381 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 
'value': 1}) 2026-01-03 00:30:15.721385 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-03 00:30:15.721388 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-01-03 00:30:15.721392 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-03 00:30:15.721396 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-01-03 00:30:15.721399 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-03 00:30:15.721403 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-01-03 00:30:15.721408 | orchestrator | 2026-01-03 00:30:15.721412 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-01-03 00:30:15.721417 | orchestrator | Saturday 03 January 2026 00:30:12 +0000 (0:00:06.106) 0:03:37.712 ****** 2026-01-03 00:30:15.721421 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:30:15.721426 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:30:15.721430 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:30:15.721435 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:30:15.721439 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:30:15.721444 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:30:15.721450 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-01-03 00:30:15.721456 | orchestrator | 2026-01-03 00:30:15.721462 | 
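The sysctl role applies a different parameter set per host group: vm.max_map_count on elasticsearch hosts, TCP tuning on rabbitmq hosts, vm.swappiness=1 everywhere, nf_conntrack_max on compute and network nodes, and fs.inotify.max_user_instances on k3s nodes. Written as plain sysctl.conf syntax, the rabbitmq profile corresponds to the loop items shown above (the values are taken directly from this log; the file name and how the role renders it are not shown here):

```conf
# rabbitmq TCP tuning, as applied by osism.commons.sysctl above
net.ipv4.tcp_keepalive_time = 6
net.ipv4.tcp_keepalive_intvl = 3
net.ipv4.tcp_keepalive_probes = 3
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 4096
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 8192
```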
orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-01-03 00:30:15.721468 | orchestrator | Saturday 03 January 2026 00:30:14 +0000 (0:00:01.635) 0:03:39.348 ****** 2026-01-03 00:30:15.721474 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-03 00:30:15.721485 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:30:15.721491 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-03 00:30:15.721498 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-03 00:30:15.721505 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:30:15.721512 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:30:15.721518 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-03 00:30:15.721524 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:30:15.721531 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-03 00:30:15.721540 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-03 00:30:15.721549 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-03 00:30:29.671129 | orchestrator | 2026-01-03 00:30:29.671230 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-01-03 00:30:29.671243 | orchestrator | Saturday 03 January 2026 00:30:15 +0000 (0:00:01.561) 0:03:40.910 ****** 2026-01-03 00:30:29.671251 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-03 00:30:29.671259 | orchestrator | skipping: [testbed-node-3] => 
(item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-03 00:30:29.671266 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:30:29.671274 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-03 00:30:29.671281 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:30:29.671334 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-03 00:30:29.671342 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:30:29.671349 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:30:29.671356 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-03 00:30:29.671363 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-03 00:30:29.671369 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-03 00:30:29.671376 | orchestrator | 2026-01-03 00:30:29.671383 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-01-03 00:30:29.671390 | orchestrator | Saturday 03 January 2026 00:30:16 +0000 (0:00:00.623) 0:03:41.533 ****** 2026-01-03 00:30:29.671397 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-03 00:30:29.671404 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:30:29.671411 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-03 00:30:29.671418 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-03 00:30:29.671425 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:30:29.671432 | orchestrator | skipping: [testbed-node-1] 2026-01-03 
00:30:29.671438 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-03 00:30:29.671445 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:30:29.671452 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-03 00:30:29.671459 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-03 00:30:29.671465 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-03 00:30:29.671495 | orchestrator | 2026-01-03 00:30:29.671501 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-01-03 00:30:29.671508 | orchestrator | Saturday 03 January 2026 00:30:17 +0000 (0:00:01.592) 0:03:43.126 ****** 2026-01-03 00:30:29.671514 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:30:29.671521 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:30:29.671527 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:30:29.671533 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:30:29.671539 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:30:29.671546 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:30:29.671553 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:30:29.671559 | orchestrator | 2026-01-03 00:30:29.671565 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-01-03 00:30:29.671571 | orchestrator | Saturday 03 January 2026 00:30:18 +0000 (0:00:00.261) 0:03:43.387 ****** 2026-01-03 00:30:29.671576 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:30:29.671583 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:30:29.671590 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:30:29.671596 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:30:29.671603 | orchestrator | ok: 
[testbed-node-0] 2026-01-03 00:30:29.671609 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:30:29.671614 | orchestrator | ok: [testbed-manager] 2026-01-03 00:30:29.671620 | orchestrator | 2026-01-03 00:30:29.671626 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-01-03 00:30:29.671632 | orchestrator | Saturday 03 January 2026 00:30:23 +0000 (0:00:05.100) 0:03:48.487 ****** 2026-01-03 00:30:29.671638 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-01-03 00:30:29.671645 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-01-03 00:30:29.671651 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:30:29.671658 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:30:29.671664 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-01-03 00:30:29.671670 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-01-03 00:30:29.671677 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:30:29.671683 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-01-03 00:30:29.671689 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:30:29.671695 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-01-03 00:30:29.671701 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:30:29.671706 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:30:29.671712 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-01-03 00:30:29.671718 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:30:29.671725 | orchestrator | 2026-01-03 00:30:29.671731 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-01-03 00:30:29.671737 | orchestrator | Saturday 03 January 2026 00:30:23 +0000 (0:00:00.313) 0:03:48.801 ****** 2026-01-03 00:30:29.671744 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-01-03 00:30:29.671751 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-01-03 
00:30:29.671757 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-01-03 00:30:29.671780 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-01-03 00:30:29.671787 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-01-03 00:30:29.671793 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-01-03 00:30:29.671800 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-01-03 00:30:29.671806 | orchestrator | 2026-01-03 00:30:29.671812 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-01-03 00:30:29.671819 | orchestrator | Saturday 03 January 2026 00:30:24 +0000 (0:00:01.174) 0:03:49.976 ****** 2026-01-03 00:30:29.671827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:30:29.671837 | orchestrator | 2026-01-03 00:30:29.671844 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-01-03 00:30:29.671861 | orchestrator | Saturday 03 January 2026 00:30:25 +0000 (0:00:00.481) 0:03:50.457 ****** 2026-01-03 00:30:29.671868 | orchestrator | ok: [testbed-manager] 2026-01-03 00:30:29.671875 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:30:29.671882 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:30:29.671889 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:30:29.671896 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:30:29.671903 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:30:29.671910 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:30:29.671917 | orchestrator | 2026-01-03 00:30:29.671925 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-01-03 00:30:29.671932 | orchestrator | Saturday 03 January 2026 00:30:26 +0000 (0:00:01.402) 0:03:51.860 ****** 2026-01-03 
00:30:29.671939 | orchestrator | ok: [testbed-manager] 2026-01-03 00:30:29.671946 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:30:29.671953 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:30:29.671959 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:30:29.671966 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:30:29.671972 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:30:29.671979 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:30:29.671986 | orchestrator | 2026-01-03 00:30:29.671993 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-01-03 00:30:29.672000 | orchestrator | Saturday 03 January 2026 00:30:27 +0000 (0:00:00.681) 0:03:52.541 ****** 2026-01-03 00:30:29.672007 | orchestrator | changed: [testbed-manager] 2026-01-03 00:30:29.672013 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:30:29.672019 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:30:29.672026 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:30:29.672033 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:30:29.672040 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:30:29.672047 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:30:29.672054 | orchestrator | 2026-01-03 00:30:29.672077 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-01-03 00:30:29.672084 | orchestrator | Saturday 03 January 2026 00:30:27 +0000 (0:00:00.625) 0:03:53.167 ****** 2026-01-03 00:30:29.672090 | orchestrator | ok: [testbed-manager] 2026-01-03 00:30:29.672095 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:30:29.672101 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:30:29.672107 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:30:29.672112 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:30:29.672118 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:30:29.672123 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:30:29.672129 | 
orchestrator | 2026-01-03 00:30:29.672135 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-01-03 00:30:29.672141 | orchestrator | Saturday 03 January 2026 00:30:28 +0000 (0:00:00.690) 0:03:53.857 ****** 2026-01-03 00:30:29.672152 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398736.328008, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 00:30:29.672161 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398739.953966, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 00:30:29.672178 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398739.5504482, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-01-03 00:30:29.672194 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398738.1539779, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 00:30:34.529825 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398735.291478, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 00:30:34.529898 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398749.084166, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 00:30:34.529905 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767398760.1342723, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 00:30:34.529910 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 00:30:34.529914 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 00:30:34.529934 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 00:30:34.529950 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 00:30:34.529972 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 00:30:34.529977 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 00:30:34.529981 | 
orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 00:30:34.529986 | orchestrator | 2026-01-03 00:30:34.529991 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-01-03 00:30:34.529997 | orchestrator | Saturday 03 January 2026 00:30:29 +0000 (0:00:01.002) 0:03:54.860 ****** 2026-01-03 00:30:34.530001 | orchestrator | changed: [testbed-manager] 2026-01-03 00:30:34.530006 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:30:34.530010 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:30:34.530053 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:30:34.530057 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:30:34.530061 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:30:34.530065 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:30:34.530070 | orchestrator | 2026-01-03 00:30:34.530074 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-01-03 00:30:34.530078 | orchestrator | Saturday 03 January 2026 00:30:30 +0000 (0:00:01.126) 0:03:55.986 ****** 2026-01-03 00:30:34.530082 | orchestrator | changed: [testbed-manager] 2026-01-03 00:30:34.530087 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:30:34.530091 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:30:34.530099 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:30:34.530103 | orchestrator | changed: [testbed-node-3] 2026-01-03 
00:30:34.530107 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:30:34.530112 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:30:34.530116 | orchestrator | 2026-01-03 00:30:34.530120 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-01-03 00:30:34.530124 | orchestrator | Saturday 03 January 2026 00:30:31 +0000 (0:00:01.169) 0:03:57.156 ****** 2026-01-03 00:30:34.530128 | orchestrator | changed: [testbed-manager] 2026-01-03 00:30:34.530132 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:30:34.530136 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:30:34.530140 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:30:34.530144 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:30:34.530148 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:30:34.530152 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:30:34.530156 | orchestrator | 2026-01-03 00:30:34.530160 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-01-03 00:30:34.530164 | orchestrator | Saturday 03 January 2026 00:30:33 +0000 (0:00:01.161) 0:03:58.317 ****** 2026-01-03 00:30:34.530168 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:30:34.530172 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:30:34.530176 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:30:34.530180 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:30:34.530185 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:30:34.530189 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:30:34.530193 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:30:34.530197 | orchestrator | 2026-01-03 00:30:34.530201 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-01-03 00:30:34.530205 | orchestrator | Saturday 03 January 2026 00:30:33 +0000 (0:00:00.263) 0:03:58.581 ****** 2026-01-03 
00:30:34.530209 | orchestrator | ok: [testbed-manager] 2026-01-03 00:30:34.530214 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:30:34.530219 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:30:34.530225 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:30:34.530230 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:30:34.530234 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:30:34.530238 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:30:34.530242 | orchestrator | 2026-01-03 00:30:34.530246 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-01-03 00:30:34.530250 | orchestrator | Saturday 03 January 2026 00:30:34 +0000 (0:00:00.762) 0:03:59.344 ****** 2026-01-03 00:30:34.530255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:30:34.530261 | orchestrator | 2026-01-03 00:30:34.530265 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-01-03 00:30:34.530273 | orchestrator | Saturday 03 January 2026 00:30:34 +0000 (0:00:00.381) 0:03:59.726 ****** 2026-01-03 00:31:52.337159 | orchestrator | ok: [testbed-manager] 2026-01-03 00:31:52.337308 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:31:52.337338 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:31:52.337358 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:31:52.337369 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:31:52.337379 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:31:52.337389 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:31:52.337399 | orchestrator | 2026-01-03 00:31:52.337411 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-01-03 00:31:52.337422 | orchestrator | 
Saturday 03 January 2026 00:30:44 +0000 (0:00:10.183) 0:04:09.909 ****** 2026-01-03 00:31:52.337432 | orchestrator | ok: [testbed-manager] 2026-01-03 00:31:52.337442 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:31:52.337452 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:31:52.337462 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:31:52.337496 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:31:52.337506 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:31:52.337516 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:31:52.337525 | orchestrator | 2026-01-03 00:31:52.337535 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-01-03 00:31:52.337545 | orchestrator | Saturday 03 January 2026 00:30:45 +0000 (0:00:01.280) 0:04:11.190 ****** 2026-01-03 00:31:52.337554 | orchestrator | ok: [testbed-manager] 2026-01-03 00:31:52.337564 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:31:52.337573 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:31:52.337582 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:31:52.337592 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:31:52.337601 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:31:52.337610 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:31:52.337620 | orchestrator | 2026-01-03 00:31:52.337629 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-01-03 00:31:52.337639 | orchestrator | Saturday 03 January 2026 00:30:47 +0000 (0:00:01.101) 0:04:12.291 ****** 2026-01-03 00:31:52.337649 | orchestrator | ok: [testbed-manager] 2026-01-03 00:31:52.337659 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:31:52.337668 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:31:52.337677 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:31:52.337687 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:31:52.337696 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:31:52.337707 | orchestrator | ok: 
[testbed-node-2] 2026-01-03 00:31:52.337718 | orchestrator | 2026-01-03 00:31:52.337730 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-01-03 00:31:52.337821 | orchestrator | Saturday 03 January 2026 00:30:47 +0000 (0:00:00.286) 0:04:12.578 ****** 2026-01-03 00:31:52.337833 | orchestrator | ok: [testbed-manager] 2026-01-03 00:31:52.337844 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:31:52.337855 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:31:52.337866 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:31:52.337876 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:31:52.337887 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:31:52.337898 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:31:52.337910 | orchestrator | 2026-01-03 00:31:52.337921 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-01-03 00:31:52.337933 | orchestrator | Saturday 03 January 2026 00:30:47 +0000 (0:00:00.311) 0:04:12.890 ****** 2026-01-03 00:31:52.337945 | orchestrator | ok: [testbed-manager] 2026-01-03 00:31:52.337955 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:31:52.337966 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:31:52.337976 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:31:52.337987 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:31:52.337999 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:31:52.338009 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:31:52.338058 | orchestrator | 2026-01-03 00:31:52.338068 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-01-03 00:31:52.338078 | orchestrator | Saturday 03 January 2026 00:30:47 +0000 (0:00:00.270) 0:04:13.160 ****** 2026-01-03 00:31:52.338088 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:31:52.338098 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:31:52.338107 | orchestrator | ok: 
[testbed-node-2] 2026-01-03 00:31:52.338117 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:31:52.338127 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:31:52.338136 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:31:52.338146 | orchestrator | ok: [testbed-manager] 2026-01-03 00:31:52.338156 | orchestrator | 2026-01-03 00:31:52.338166 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-01-03 00:31:52.338176 | orchestrator | Saturday 03 January 2026 00:30:52 +0000 (0:00:04.581) 0:04:17.741 ****** 2026-01-03 00:31:52.338187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:31:52.338211 | orchestrator | 2026-01-03 00:31:52.338221 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-01-03 00:31:52.338230 | orchestrator | Saturday 03 January 2026 00:30:52 +0000 (0:00:00.369) 0:04:18.111 ****** 2026-01-03 00:31:52.338240 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-01-03 00:31:52.338250 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-01-03 00:31:52.338260 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-01-03 00:31:52.338271 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-01-03 00:31:52.338280 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:31:52.338307 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-01-03 00:31:52.338317 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:31:52.338327 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-01-03 00:31:52.338337 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-01-03 00:31:52.338346 | orchestrator | 
skipping: [testbed-node-4] 2026-01-03 00:31:52.338356 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-01-03 00:31:52.338366 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-01-03 00:31:52.338375 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-01-03 00:31:52.338385 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:31:52.338395 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-01-03 00:31:52.338405 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-01-03 00:31:52.338432 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:31:52.338442 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:31:52.338452 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-01-03 00:31:52.338461 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-01-03 00:31:52.338471 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:31:52.338481 | orchestrator | 2026-01-03 00:31:52.338490 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-01-03 00:31:52.338500 | orchestrator | Saturday 03 January 2026 00:30:53 +0000 (0:00:00.334) 0:04:18.445 ****** 2026-01-03 00:31:52.338510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:31:52.338520 | orchestrator | 2026-01-03 00:31:52.338530 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-01-03 00:31:52.338540 | orchestrator | Saturday 03 January 2026 00:30:53 +0000 (0:00:00.320) 0:04:18.766 ****** 2026-01-03 00:31:52.338549 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-01-03 00:31:52.338559 | orchestrator | skipping: 
[testbed-node-3] => (item=ModemManager.service)  2026-01-03 00:31:52.338569 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:31:52.338579 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-01-03 00:31:52.338588 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:31:52.338598 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:31:52.338608 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-01-03 00:31:52.338617 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:31:52.338627 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-01-03 00:31:52.338636 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-01-03 00:31:52.338646 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:31:52.338656 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:31:52.338666 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-01-03 00:31:52.338675 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:31:52.338685 | orchestrator | 2026-01-03 00:31:52.338713 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-01-03 00:31:52.338733 | orchestrator | Saturday 03 January 2026 00:30:53 +0000 (0:00:00.248) 0:04:19.015 ****** 2026-01-03 00:31:52.338743 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:31:52.338753 | orchestrator | 2026-01-03 00:31:52.338763 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-01-03 00:31:52.338773 | orchestrator | Saturday 03 January 2026 00:30:54 +0000 (0:00:00.362) 0:04:19.378 ****** 2026-01-03 00:31:52.338782 | orchestrator | changed: [testbed-manager] 2026-01-03 
00:31:52.338792 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:31:52.338802 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:31:52.338811 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:31:52.338821 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:31:52.338831 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:31:52.338840 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:31:52.338850 | orchestrator | 2026-01-03 00:31:52.338859 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-01-03 00:31:52.338869 | orchestrator | Saturday 03 January 2026 00:31:27 +0000 (0:00:32.901) 0:04:52.279 ****** 2026-01-03 00:31:52.338879 | orchestrator | changed: [testbed-manager] 2026-01-03 00:31:52.338888 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:31:52.338898 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:31:52.338907 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:31:52.338917 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:31:52.338926 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:31:52.338936 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:31:52.338945 | orchestrator | 2026-01-03 00:31:52.338955 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-01-03 00:31:52.338965 | orchestrator | Saturday 03 January 2026 00:31:35 +0000 (0:00:08.339) 0:05:00.618 ****** 2026-01-03 00:31:52.338974 | orchestrator | changed: [testbed-manager] 2026-01-03 00:31:52.338984 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:31:52.338993 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:31:52.339003 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:31:52.339012 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:31:52.339022 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:31:52.339031 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:31:52.339041 | 
TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
Saturday 03 January 2026 00:31:43 +0000 (0:00:08.133) 0:05:08.752 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-4]

TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
Saturday 03 January 2026 00:31:45 +0000 (0:00:01.892) 0:05:10.644 ******
changed: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-4]

TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
Saturday 03 January 2026 00:31:52 +0000 (0:00:06.877) 0:05:17.522 ******
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
Saturday 03 January 2026 00:31:52 +0000 (0:00:00.504) 0:05:18.027 ******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.timezone : Install tzdata package] *************************
Saturday 03 January 2026 00:31:53 +0000 (0:00:00.828) 0:05:18.855 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [osism.commons.timezone : Set timezone to UTC] ****************************
Saturday 03 January 2026 00:31:55 +0000 (0:00:02.109) 0:05:20.965 ******
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
Saturday 03 January 2026 00:31:56 +0000 (0:00:00.835) 0:05:21.800 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
Saturday 03 January 2026 00:31:56 +0000 (0:00:00.272) 0:05:22.072 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Gather variables for each operating system] ******
Saturday 03 January 2026 00:31:57 +0000 (0:00:00.378) 0:05:22.451 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Set docker_version variable to default value] ****
Saturday 03 January 2026 00:31:57 +0000 (0:00:00.283) 0:05:22.734 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
Saturday 03 January 2026 00:31:57 +0000 (0:00:00.266) 0:05:23.001 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Print used docker version] ***********************
Saturday 03 January 2026 00:31:58 +0000 (0:00:00.280) 0:05:23.281 ******
ok: [testbed-manager] =>
    docker_version: 5:27.5.1
ok: [testbed-node-3] =>
    docker_version: 5:27.5.1
ok: [testbed-node-4] =>
    docker_version: 5:27.5.1
ok: [testbed-node-5] =>
    docker_version: 5:27.5.1
ok: [testbed-node-0] =>
    docker_version: 5:27.5.1
ok: [testbed-node-1] =>
    docker_version: 5:27.5.1
ok: [testbed-node-2] =>
    docker_version: 5:27.5.1

TASK [osism.services.docker : Print used docker cli version] *******************
Saturday 03 January 2026 00:31:58 +0000 (0:00:00.260) 0:05:23.541 ******
ok: [testbed-manager] =>
    docker_cli_version: 5:27.5.1
ok: [testbed-node-3] =>
    docker_cli_version: 5:27.5.1
ok: [testbed-node-4] =>
    docker_cli_version: 5:27.5.1
ok: [testbed-node-5] =>
    docker_cli_version: 5:27.5.1
ok: [testbed-node-0] =>
    docker_cli_version: 5:27.5.1
ok: [testbed-node-1] =>
    docker_cli_version: 5:27.5.1
ok: [testbed-node-2] =>
    docker_cli_version: 5:27.5.1

TASK [osism.services.docker : Include block storage tasks] *********************
Saturday 03 January 2026 00:31:58 +0000 (0:00:00.302) 0:05:23.843 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Include zram storage tasks] **********************
Saturday 03 January 2026 00:31:58 +0000 (0:00:00.265) 0:05:24.108 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Include docker install tasks] ********************
Saturday 03 January 2026 00:31:59 +0000 (0:00:00.280) 0:05:24.389 ******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Remove old architecture-dependent repository] ****
Saturday 03 January 2026 00:31:59 +0000 (0:00:00.379) 0:05:24.769 ******
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [osism.services.docker : Gather package facts] ****************************
Saturday 03 January 2026 00:32:00 +0000 (0:00:01.011) 0:05:25.780 ******
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-0]

TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
Saturday 03 January 2026 00:32:03 +0000 (0:00:02.841) 0:05:28.622 ******
skipping: [testbed-manager] => (item=containerd)
skipping: [testbed-manager] => (item=docker.io)
skipping: [testbed-manager] => (item=docker-engine)
skipping: [testbed-node-3] => (item=containerd)
skipping: [testbed-node-3] => (item=docker.io)
skipping: [testbed-node-3] => (item=docker-engine)
skipping: [testbed-manager]
skipping: [testbed-node-4] => (item=containerd)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=docker.io)
skipping: [testbed-node-4] => (item=docker-engine)
skipping: [testbed-node-5] => (item=containerd)
skipping: [testbed-node-5] => (item=docker.io)
skipping: [testbed-node-5] => (item=docker-engine)
skipping: [testbed-node-4]
skipping: [testbed-node-0] => (item=containerd)
skipping: [testbed-node-0] => (item=docker.io)
skipping: [testbed-node-0] => (item=docker-engine)
skipping: [testbed-node-5]
skipping: [testbed-node-1] => (item=containerd)
skipping: [testbed-node-1] => (item=docker.io)
skipping: [testbed-node-1] => (item=docker-engine)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=containerd)
skipping: [testbed-node-2] => (item=docker.io)
skipping: [testbed-node-2] => (item=docker-engine)
skipping: [testbed-node-2]

TASK [osism.services.docker : Install apt-transport-https package] *************
Saturday 03 January 2026 00:32:03 +0000 (0:00:00.575) 0:05:29.197 ******
ok: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-1]

TASK [osism.services.docker : Add repository gpg key] **************************
Saturday 03 January 2026 00:32:11 +0000 (0:00:07.371) 0:05:36.569 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Add repository] **********************************
Saturday 03 January 2026 00:32:12 +0000 (0:00:01.076) 0:05:37.646 ******
ok: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-1]

TASK [osism.services.docker : Update package cache] ****************************
Saturday 03 January 2026 00:32:21 +0000 (0:00:08.961) 0:05:46.607 ******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [osism.services.docker : Pin docker package version] **********************
Saturday 03 January 2026 00:32:24 +0000 (0:00:03.485) 0:05:50.093 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Pin docker-cli package version] ******************
Saturday 03 January 2026 00:32:26 +0000 (0:00:01.341) 0:05:51.434 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Unlock containerd package] ***********************
Saturday 03 January 2026 00:32:27 +0000 (0:00:01.509) 0:05:52.943 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.services.docker : Install containerd package] **********************
Saturday 03 January 2026 00:32:28 +0000 (0:00:00.610) 0:05:53.554 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-4]

TASK [osism.services.docker : Lock containerd package] *************************
Saturday 03 January 2026 00:32:38 +0000 (0:00:10.510) 0:06:04.065 ******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Install docker-cli package] **********************
Saturday 03 January 2026 00:32:39 +0000 (0:00:00.946) 0:06:05.011 ******
ok: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-1]

TASK [osism.services.docker : Install docker package] **************************
Saturday 03 January 2026 00:32:49 +0000 (0:00:09.970) 0:06:14.982 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-4]

TASK [osism.services.docker : Unblock installation of python docker packages] ***
Saturday 03 January 2026 00:33:01 +0000 (0:00:11.863) 0:06:26.845 ******
ok: [testbed-manager] => (item=python3-docker)
ok: [testbed-node-3] => (item=python3-docker)
ok: [testbed-node-4] => (item=python3-docker)
ok: [testbed-node-5] => (item=python3-docker)
ok: [testbed-manager] => (item=python-docker)
ok: [testbed-node-0] => (item=python3-docker)
ok: [testbed-node-1] => (item=python3-docker)
ok: [testbed-node-3] => (item=python-docker)
ok: [testbed-node-2] => (item=python3-docker)
ok: [testbed-node-4] => (item=python-docker)
ok: [testbed-node-5] => (item=python-docker)
ok: [testbed-node-0] => (item=python-docker)
ok: [testbed-node-1] => (item=python-docker)
ok: [testbed-node-2] => (item=python-docker)

TASK [osism.services.docker : Install python3 docker package] ******************
Saturday 03 January 2026 00:33:02 +0000 (0:00:01.241) 0:06:28.087 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
Saturday 03 January 2026 00:33:03 +0000 (0:00:00.520) 0:06:28.608 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-4]

TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
Saturday 03 January 2026 00:33:07 +0000 (0:00:04.084) 0:06:32.693 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
Saturday 03 January 2026 00:33:07 +0000 (0:00:00.477) 0:06:33.171 ******
skipping: [testbed-manager] => (item=python3-docker)
skipping: [testbed-manager] => (item=python-docker)
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item=python3-docker)
skipping: [testbed-node-3] => (item=python-docker)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=python3-docker)
skipping: [testbed-node-4] => (item=python-docker)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=python3-docker)
skipping: [testbed-node-5] => (item=python-docker)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=python3-docker)
skipping: [testbed-node-0] => (item=python-docker)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=python3-docker)
skipping: [testbed-node-1] => (item=python-docker)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=python3-docker)
skipping: [testbed-node-2] => (item=python-docker)
skipping: [testbed-node-2]

TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
Saturday 03 January 2026 00:33:08 +0000 (0:00:00.678) 0:06:33.850 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
Saturday 03 January 2026 00:33:09 +0000 (0:00:00.505) 0:06:34.355 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Install packages required by docker login] *******
Saturday 03 January 2026 00:33:09 +0000 (0:00:00.463) 0:06:34.818 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Ensure that some packages are not installed] *****
Saturday 03 January 2026 00:33:10 +0000 (0:00:00.492) 0:06:35.311 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-0]

TASK [osism.services.docker : Include config tasks] ****************************
Saturday 03 January 2026 00:33:12 +0000 (0:00:02.005) 0:06:37.317 ******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Create plugins directory] ************************
Saturday 03 January 2026 00:33:12 +0000 (0:00:00.860) 0:06:38.178 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Create systemd overlay directory] ****************
Saturday 03 January 2026 00:33:13 +0000 (0:00:00.883) 0:06:39.061 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Copy systemd overlay file] ***********************
Saturday 03 January 2026 00:33:14 +0000 (0:00:00.854) 0:06:39.916 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
Saturday 03 January 2026 00:33:16 +0000 (0:00:01.588) 0:06:41.505 ******
skipping: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Copy limits configuration file] ******************
Saturday 03 January 2026 00:33:17 +0000 (0:00:01.398) 0:06:42.904 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Copy daemon.json configuration file] *************
Saturday 03 January 2026 00:33:19 +0000 (0:00:01.325) 0:06:44.230 ******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:27.618312 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:27.618323 | orchestrator | 2026-01-03 00:33:27.618334 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-01-03 00:33:27.618345 | orchestrator | Saturday 03 January 2026 00:33:20 +0000 (0:00:01.388) 0:06:45.618 ****** 2026-01-03 00:33:27.618357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:33:27.618369 | orchestrator | 2026-01-03 00:33:27.618380 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-01-03 00:33:27.618390 | orchestrator | Saturday 03 January 2026 00:33:21 +0000 (0:00:01.027) 0:06:46.645 ****** 2026-01-03 00:33:27.618401 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:27.618412 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:33:27.618423 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:27.618433 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:27.618444 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:27.618455 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:27.618465 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:27.618476 | orchestrator | 2026-01-03 00:33:27.618487 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-01-03 00:33:27.618498 | orchestrator | Saturday 03 January 2026 00:33:22 +0000 (0:00:01.382) 0:06:48.028 ****** 2026-01-03 00:33:27.618526 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:27.618537 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:27.618548 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:33:27.618558 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:27.618569 | orchestrator | 
ok: [testbed-node-0] 2026-01-03 00:33:27.618580 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:27.618590 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:27.618601 | orchestrator | 2026-01-03 00:33:27.618612 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-03 00:33:27.618622 | orchestrator | Saturday 03 January 2026 00:33:23 +0000 (0:00:01.124) 0:06:49.153 ****** 2026-01-03 00:33:27.618633 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:27.618644 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:27.618655 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:33:27.618665 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:27.618676 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:27.618686 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:27.618697 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:27.618707 | orchestrator | 2026-01-03 00:33:27.618718 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-03 00:33:27.618729 | orchestrator | Saturday 03 January 2026 00:33:25 +0000 (0:00:01.128) 0:06:50.281 ****** 2026-01-03 00:33:27.618740 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:27.618751 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:27.618761 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:33:27.618772 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:27.618782 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:27.618793 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:27.618803 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:27.618814 | orchestrator | 2026-01-03 00:33:27.618824 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-01-03 00:33:27.618835 | orchestrator | Saturday 03 January 2026 00:33:26 +0000 (0:00:01.298) 0:06:51.580 ****** 2026-01-03 00:33:27.618846 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:33:27.618858 | orchestrator | 2026-01-03 00:33:27.618868 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:33:27.618879 | orchestrator | Saturday 03 January 2026 00:33:27 +0000 (0:00:00.924) 0:06:52.505 ****** 2026-01-03 00:33:27.618890 | orchestrator | 2026-01-03 00:33:27.618901 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:33:27.618920 | orchestrator | Saturday 03 January 2026 00:33:27 +0000 (0:00:00.039) 0:06:52.544 ****** 2026-01-03 00:33:27.618940 | orchestrator | 2026-01-03 00:33:27.618960 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:33:27.618979 | orchestrator | Saturday 03 January 2026 00:33:27 +0000 (0:00:00.051) 0:06:52.596 ****** 2026-01-03 00:33:27.618999 | orchestrator | 2026-01-03 00:33:27.619020 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:33:27.619055 | orchestrator | Saturday 03 January 2026 00:33:27 +0000 (0:00:00.045) 0:06:52.641 ****** 2026-01-03 00:33:53.553795 | orchestrator | 2026-01-03 00:33:53.553897 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:33:53.553908 | orchestrator | Saturday 03 January 2026 00:33:27 +0000 (0:00:00.038) 0:06:52.679 ****** 2026-01-03 00:33:53.553915 | orchestrator | 2026-01-03 00:33:53.553922 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:33:53.553929 | orchestrator | Saturday 03 January 2026 00:33:27 +0000 (0:00:00.037) 0:06:52.717 ****** 2026-01-03 00:33:53.553953 | orchestrator | 2026-01-03 
00:33:53.554072 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-03 00:33:53.554088 | orchestrator | Saturday 03 January 2026 00:33:27 +0000 (0:00:00.045) 0:06:52.762 ****** 2026-01-03 00:33:53.554099 | orchestrator | 2026-01-03 00:33:53.554136 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-03 00:33:53.554148 | orchestrator | Saturday 03 January 2026 00:33:27 +0000 (0:00:00.038) 0:06:52.800 ****** 2026-01-03 00:33:53.554158 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:53.554170 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:53.554223 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:53.554232 | orchestrator | 2026-01-03 00:33:53.554242 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-03 00:33:53.554252 | orchestrator | Saturday 03 January 2026 00:33:28 +0000 (0:00:01.242) 0:06:54.042 ****** 2026-01-03 00:33:53.554262 | orchestrator | changed: [testbed-manager] 2026-01-03 00:33:53.554274 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:33:53.554284 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:33:53.554293 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:33:53.554302 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:33:53.554312 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:53.554322 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:53.554332 | orchestrator | 2026-01-03 00:33:53.554342 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-03 00:33:53.554353 | orchestrator | Saturday 03 January 2026 00:33:30 +0000 (0:00:01.366) 0:06:55.409 ****** 2026-01-03 00:33:53.554364 | orchestrator | changed: [testbed-manager] 2026-01-03 00:33:53.554375 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:33:53.554386 | orchestrator | changed: [testbed-node-5] 2026-01-03 
00:33:53.554396 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:33:53.554408 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:33:53.554419 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:53.554429 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:53.554441 | orchestrator | 2026-01-03 00:33:53.554451 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-03 00:33:53.554462 | orchestrator | Saturday 03 January 2026 00:33:31 +0000 (0:00:01.419) 0:06:56.828 ****** 2026-01-03 00:33:53.554472 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:33:53.554484 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:33:53.554498 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:33:53.554509 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:33:53.554519 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:53.554529 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:33:53.554540 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:53.554549 | orchestrator | 2026-01-03 00:33:53.554559 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-03 00:33:53.554570 | orchestrator | Saturday 03 January 2026 00:33:33 +0000 (0:00:02.263) 0:06:59.092 ****** 2026-01-03 00:33:53.554581 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:33:53.554591 | orchestrator | 2026-01-03 00:33:53.554601 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-03 00:33:53.554612 | orchestrator | Saturday 03 January 2026 00:33:34 +0000 (0:00:00.117) 0:06:59.210 ****** 2026-01-03 00:33:53.554622 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:53.554634 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:33:53.554645 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:33:53.554657 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:33:53.554669 | 
orchestrator | changed: [testbed-node-0] 2026-01-03 00:33:53.554680 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:53.554691 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:53.554701 | orchestrator | 2026-01-03 00:33:53.554713 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-03 00:33:53.554725 | orchestrator | Saturday 03 January 2026 00:33:35 +0000 (0:00:01.019) 0:07:00.230 ****** 2026-01-03 00:33:53.554735 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:33:53.554746 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:33:53.554756 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:33:53.554766 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:33:53.554792 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:33:53.554802 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:33:53.554812 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:33:53.554822 | orchestrator | 2026-01-03 00:33:53.554832 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-03 00:33:53.554842 | orchestrator | Saturday 03 January 2026 00:33:35 +0000 (0:00:00.538) 0:07:00.768 ****** 2026-01-03 00:33:53.554868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:33:53.554883 | orchestrator | 2026-01-03 00:33:53.554894 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-03 00:33:53.554905 | orchestrator | Saturday 03 January 2026 00:33:36 +0000 (0:00:01.018) 0:07:01.786 ****** 2026-01-03 00:33:53.554916 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:53.554926 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:53.554936 | orchestrator | ok: 
[testbed-node-4] 2026-01-03 00:33:53.554946 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:53.554957 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:53.554967 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:53.554977 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:53.554987 | orchestrator | 2026-01-03 00:33:53.554998 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-03 00:33:53.555009 | orchestrator | Saturday 03 January 2026 00:33:37 +0000 (0:00:00.840) 0:07:02.626 ****** 2026-01-03 00:33:53.555019 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-03 00:33:53.555055 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-03 00:33:53.555067 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-03 00:33:53.555078 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-03 00:33:53.555088 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-03 00:33:53.555098 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-03 00:33:53.555108 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-03 00:33:53.555118 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-03 00:33:53.555129 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-03 00:33:53.555138 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-03 00:33:53.555148 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-01-03 00:33:53.555158 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-03 00:33:53.555169 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-01-03 00:33:53.555206 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-03 00:33:53.555217 | orchestrator | 2026-01-03 00:33:53.555228 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-01-03 00:33:53.555238 | orchestrator | Saturday 03 January 2026 00:33:39 +0000 (0:00:02.475) 0:07:05.101 ****** 2026-01-03 00:33:53.555249 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:33:53.555259 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:33:53.555270 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:33:53.555281 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:33:53.555291 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:33:53.555301 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:33:53.555312 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:33:53.555322 | orchestrator | 2026-01-03 00:33:53.555331 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-03 00:33:53.555342 | orchestrator | Saturday 03 January 2026 00:33:40 +0000 (0:00:00.700) 0:07:05.802 ****** 2026-01-03 00:33:53.555354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:33:53.555378 | orchestrator | 2026-01-03 00:33:53.555389 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-03 00:33:53.555400 | orchestrator | Saturday 03 January 2026 00:33:41 +0000 (0:00:00.823) 0:07:06.625 ****** 2026-01-03 00:33:53.555410 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:53.555420 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:53.555429 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:33:53.555439 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:53.555449 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:53.555458 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:53.555468 | orchestrator | ok: 
[testbed-node-2] 2026-01-03 00:33:53.555478 | orchestrator | 2026-01-03 00:33:53.555487 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-03 00:33:53.555497 | orchestrator | Saturday 03 January 2026 00:33:42 +0000 (0:00:00.930) 0:07:07.556 ****** 2026-01-03 00:33:53.555507 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:53.555518 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:53.555528 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:33:53.555539 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:53.555550 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:53.555561 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:53.555572 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:53.555583 | orchestrator | 2026-01-03 00:33:53.555594 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-03 00:33:53.555604 | orchestrator | Saturday 03 January 2026 00:33:43 +0000 (0:00:00.986) 0:07:08.542 ****** 2026-01-03 00:33:53.555615 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:33:53.555626 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:33:53.555637 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:33:53.555648 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:33:53.555659 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:33:53.555671 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:33:53.555682 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:33:53.555692 | orchestrator | 2026-01-03 00:33:53.555704 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-03 00:33:53.555715 | orchestrator | Saturday 03 January 2026 00:33:43 +0000 (0:00:00.502) 0:07:09.044 ****** 2026-01-03 00:33:53.555726 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:53.555738 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:33:53.555749 | 
orchestrator | ok: [testbed-node-4] 2026-01-03 00:33:53.555760 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:33:53.555770 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:33:53.555781 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:33:53.555791 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:33:53.555801 | orchestrator | 2026-01-03 00:33:53.555820 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-01-03 00:33:53.555831 | orchestrator | Saturday 03 January 2026 00:33:45 +0000 (0:00:01.543) 0:07:10.588 ****** 2026-01-03 00:33:53.555842 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:33:53.555853 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:33:53.555864 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:33:53.555874 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:33:53.555885 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:33:53.555897 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:33:53.555908 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:33:53.555919 | orchestrator | 2026-01-03 00:33:53.555930 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-01-03 00:33:53.555941 | orchestrator | Saturday 03 January 2026 00:33:45 +0000 (0:00:00.477) 0:07:11.065 ****** 2026-01-03 00:33:53.555951 | orchestrator | ok: [testbed-manager] 2026-01-03 00:33:53.555961 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:33:53.555971 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:33:53.555983 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:33:53.556002 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:33:53.556013 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:33:53.556035 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:34:27.229296 | orchestrator | 2026-01-03 00:34:27.229393 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-01-03 00:34:27.229406 | orchestrator | Saturday 03 January 2026 00:33:53 +0000 (0:00:07.675) 0:07:18.740 ****** 2026-01-03 00:34:27.229416 | orchestrator | ok: [testbed-manager] 2026-01-03 00:34:27.229425 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:34:27.229434 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:34:27.229443 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:34:27.229451 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:34:27.229458 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:34:27.229466 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:34:27.229474 | orchestrator | 2026-01-03 00:34:27.229483 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-01-03 00:34:27.229491 | orchestrator | Saturday 03 January 2026 00:33:55 +0000 (0:00:01.620) 0:07:20.360 ****** 2026-01-03 00:34:27.229499 | orchestrator | ok: [testbed-manager] 2026-01-03 00:34:27.229507 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:34:27.229515 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:34:27.229522 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:34:27.229530 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:34:27.229538 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:34:27.229546 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:34:27.229553 | orchestrator | 2026-01-03 00:34:27.229561 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-01-03 00:34:27.229570 | orchestrator | Saturday 03 January 2026 00:33:56 +0000 (0:00:01.715) 0:07:22.076 ****** 2026-01-03 00:34:27.229578 | orchestrator | ok: [testbed-manager] 2026-01-03 00:34:27.229586 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:34:27.229594 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:34:27.229601 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:34:27.229609 | 
orchestrator | changed: [testbed-node-0] 2026-01-03 00:34:27.229617 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:34:27.229625 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:34:27.229633 | orchestrator | 2026-01-03 00:34:27.229641 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-03 00:34:27.229649 | orchestrator | Saturday 03 January 2026 00:33:58 +0000 (0:00:01.672) 0:07:23.748 ****** 2026-01-03 00:34:27.229656 | orchestrator | ok: [testbed-manager] 2026-01-03 00:34:27.229664 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:34:27.229672 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:34:27.229680 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:34:27.229688 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:34:27.229696 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:34:27.229704 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:34:27.229712 | orchestrator | 2026-01-03 00:34:27.229719 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-03 00:34:27.229727 | orchestrator | Saturday 03 January 2026 00:33:59 +0000 (0:00:00.848) 0:07:24.596 ****** 2026-01-03 00:34:27.229735 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:34:27.229743 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:34:27.229751 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:34:27.229759 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:34:27.229767 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:34:27.229774 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:34:27.229782 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:34:27.229792 | orchestrator | 2026-01-03 00:34:27.229801 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-01-03 00:34:27.229811 | orchestrator | Saturday 03 January 2026 00:34:00 +0000 (0:00:00.942) 0:07:25.539 ****** 
2026-01-03 00:34:27.229821 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:34:27.229830 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:34:27.229862 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:34:27.229871 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:34:27.229880 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:34:27.229889 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:34:27.229898 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:34:27.229908 | orchestrator | 2026-01-03 00:34:27.229917 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-01-03 00:34:27.229927 | orchestrator | Saturday 03 January 2026 00:34:00 +0000 (0:00:00.527) 0:07:26.066 ****** 2026-01-03 00:34:27.229936 | orchestrator | ok: [testbed-manager] 2026-01-03 00:34:27.229944 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:34:27.229954 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:34:27.229963 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:34:27.229972 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:34:27.229980 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:34:27.229989 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:34:27.229998 | orchestrator | 2026-01-03 00:34:27.230008 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-01-03 00:34:27.230067 | orchestrator | Saturday 03 January 2026 00:34:01 +0000 (0:00:00.509) 0:07:26.576 ****** 2026-01-03 00:34:27.230077 | orchestrator | ok: [testbed-manager] 2026-01-03 00:34:27.230087 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:34:27.230096 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:34:27.230104 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:34:27.230111 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:34:27.230119 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:34:27.230127 | orchestrator | ok: [testbed-node-2] 2026-01-03 
00:34:27.230134 | orchestrator |
2026-01-03 00:34:27.230160 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-03 00:34:27.230169 | orchestrator | Saturday 03 January 2026 00:34:01 +0000 (0:00:00.515) 0:07:27.091 ******
2026-01-03 00:34:27.230177 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:27.230185 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:27.230192 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:27.230200 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:27.230215 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:27.230223 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:27.230231 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:27.230239 | orchestrator |
2026-01-03 00:34:27.230247 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-03 00:34:27.230255 | orchestrator | Saturday 03 January 2026 00:34:02 +0000 (0:00:00.711) 0:07:27.802 ******
2026-01-03 00:34:27.230263 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:27.230270 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:27.230278 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:27.230286 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:27.230294 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:27.230301 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:27.230309 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:27.230317 | orchestrator |
2026-01-03 00:34:27.230339 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-03 00:34:27.230348 | orchestrator | Saturday 03 January 2026 00:34:08 +0000 (0:00:05.554) 0:07:33.357 ******
2026-01-03 00:34:27.230356 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:34:27.230363 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:34:27.230371 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:34:27.230379 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:34:27.230387 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:34:27.230395 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:34:27.230403 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:34:27.230411 | orchestrator |
2026-01-03 00:34:27.230433 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-03 00:34:27.230442 | orchestrator | Saturday 03 January 2026 00:34:08 +0000 (0:00:00.498) 0:07:33.856 ******
2026-01-03 00:34:27.230451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:34:27.230468 | orchestrator |
2026-01-03 00:34:27.230476 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-03 00:34:27.230484 | orchestrator | Saturday 03 January 2026 00:34:09 +0000 (0:00:00.952) 0:07:34.808 ******
2026-01-03 00:34:27.230492 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:27.230500 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:27.230508 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:27.230515 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:27.230523 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:27.230531 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:27.230538 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:27.230546 | orchestrator |
2026-01-03 00:34:27.230554 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-03 00:34:27.230562 | orchestrator | Saturday 03 January 2026 00:34:11 +0000 (0:00:01.897) 0:07:36.705 ******
2026-01-03 00:34:27.230570 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:27.230578 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:27.230585 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:27.230593 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:27.230601 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:27.230608 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:27.230616 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:27.230624 | orchestrator |
2026-01-03 00:34:27.230632 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-03 00:34:27.230640 | orchestrator | Saturday 03 January 2026 00:34:13 +0000 (0:00:02.187) 0:07:38.893 ******
2026-01-03 00:34:27.230648 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:27.230655 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:27.230663 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:27.230671 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:27.230679 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:27.230686 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:27.230694 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:27.230702 | orchestrator |
2026-01-03 00:34:27.230710 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-03 00:34:27.230718 | orchestrator | Saturday 03 January 2026 00:34:14 +0000 (0:00:00.897) 0:07:39.791 ******
2026-01-03 00:34:27.230726 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:34:27.230736 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:34:27.230744 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:34:27.230752 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:34:27.230760 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:34:27.230768 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:34:27.230775 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-03 00:34:27.230783 | orchestrator |
2026-01-03 00:34:27.230791 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-03 00:34:27.230803 | orchestrator | Saturday 03 January 2026 00:34:16 +0000 (0:00:01.833) 0:07:41.624 ******
2026-01-03 00:34:27.230811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:34:27.230825 | orchestrator |
2026-01-03 00:34:27.230833 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-03 00:34:27.230841 | orchestrator | Saturday 03 January 2026 00:34:17 +0000 (0:00:00.770) 0:07:42.395 ******
2026-01-03 00:34:27.230849 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:27.230857 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:27.230865 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:27.230873 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:27.230881 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:27.230889 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:27.230896 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:27.230904 | orchestrator |
2026-01-03 00:34:27.230917 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-03 00:34:57.759469 | orchestrator | Saturday 03 January 2026 00:34:27 +0000 (0:00:10.025) 0:07:52.421 ******
2026-01-03 00:34:57.759568 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:57.759580 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:57.759588 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:57.759594 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:57.759601 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:57.759607 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:57.759614 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:57.759621 | orchestrator |
2026-01-03 00:34:57.759629 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-03 00:34:57.759636 | orchestrator | Saturday 03 January 2026 00:34:29 +0000 (0:00:01.879) 0:07:54.300 ******
2026-01-03 00:34:57.759643 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:57.759651 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:57.759658 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:57.759665 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:57.759672 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:57.759679 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:57.759686 | orchestrator |
2026-01-03 00:34:57.759693 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-03 00:34:57.759700 | orchestrator | Saturday 03 January 2026 00:34:30 +0000 (0:00:01.301) 0:07:55.601 ******
2026-01-03 00:34:57.759708 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:57.759717 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:57.759724 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:57.759731 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:57.759738 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:57.759745 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:57.759752 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:57.759759 | orchestrator |
2026-01-03 00:34:57.759766 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-01-03 00:34:57.759773 | orchestrator |
2026-01-03 00:34:57.759780 | orchestrator | TASK [Include hardening role] **************************************************
2026-01-03 00:34:57.759787 | orchestrator | Saturday 03 January 2026 00:34:31 +0000 (0:00:01.224) 0:07:56.826 ******
2026-01-03 00:34:57.759794 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:34:57.759801 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:34:57.759808 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:34:57.759815 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:34:57.759821 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:34:57.759828 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:34:57.759834 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:34:57.759841 | orchestrator |
2026-01-03 00:34:57.759848 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-01-03 00:34:57.759855 | orchestrator |
2026-01-03 00:34:57.759861 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-01-03 00:34:57.759868 | orchestrator | Saturday 03 January 2026 00:34:32 +0000 (0:00:00.674) 0:07:57.501 ******
2026-01-03 00:34:57.759901 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:57.759909 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:57.759916 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:57.759922 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:57.759929 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:57.759935 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:57.759941 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:57.759947 | orchestrator |
2026-01-03 00:34:57.759955 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-01-03 00:34:57.759961 | orchestrator | Saturday 03 January 2026 00:34:33 +0000 (0:00:01.366) 0:07:58.867 ******
2026-01-03 00:34:57.759968 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:57.759974 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:57.759980 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:57.759986 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:57.759993 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:57.759999 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:57.760005 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:57.760012 | orchestrator |
2026-01-03 00:34:57.760018 | orchestrator | TASK [Include auditd role] *****************************************************
2026-01-03 00:34:57.760024 | orchestrator | Saturday 03 January 2026 00:34:35 +0000 (0:00:01.406) 0:08:00.274 ******
2026-01-03 00:34:57.760031 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:34:57.760037 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:34:57.760043 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:34:57.760049 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:34:57.760056 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:34:57.760062 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:34:57.760069 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:34:57.760075 | orchestrator |
2026-01-03 00:34:57.760082 | orchestrator | TASK [Include smartd role] *****************************************************
2026-01-03 00:34:57.760088 | orchestrator | Saturday 03 January 2026 00:34:35 +0000 (0:00:00.488) 0:08:00.763 ******
2026-01-03 00:34:57.760095 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:34:57.760104 | orchestrator |
2026-01-03 00:34:57.760151 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-01-03 00:34:57.760159 | orchestrator | Saturday 03 January 2026 00:34:36 +0000 (0:00:00.960) 0:08:01.724 ******
2026-01-03 00:34:57.760167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:34:57.760174 | orchestrator |
2026-01-03 00:34:57.760180 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-01-03 00:34:57.760186 | orchestrator | Saturday 03 January 2026 00:34:37 +0000 (0:00:00.796) 0:08:02.521 ******
2026-01-03 00:34:57.760192 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:57.760199 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:57.760204 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:57.760211 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:57.760217 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:57.760223 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:57.760230 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:57.760236 | orchestrator |
2026-01-03 00:34:57.760259 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-01-03 00:34:57.760266 | orchestrator | Saturday 03 January 2026 00:34:46 +0000 (0:00:09.027) 0:08:11.548 ******
2026-01-03 00:34:57.760272 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:57.760278 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:57.760284 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:57.760290 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:57.760306 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:57.760313 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:57.760320 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:57.760327 | orchestrator |
2026-01-03 00:34:57.760334 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-01-03 00:34:57.760340 | orchestrator | Saturday 03 January 2026 00:34:47 +0000 (0:00:01.030) 0:08:12.579 ******
2026-01-03 00:34:57.760347 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:57.760353 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:57.760360 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:57.760367 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:57.760373 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:57.760380 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:57.760387 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:57.760393 | orchestrator |
2026-01-03 00:34:57.760400 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-01-03 00:34:57.760407 | orchestrator | Saturday 03 January 2026 00:34:48 +0000 (0:00:01.346) 0:08:13.925 ******
2026-01-03 00:34:57.760414 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:57.760421 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:57.760428 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:57.760434 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:57.760441 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:57.760448 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:57.760455 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:57.760462 | orchestrator |
2026-01-03 00:34:57.760469 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-01-03 00:34:57.760476 | orchestrator | Saturday 03 January 2026 00:34:50 +0000 (0:00:01.924) 0:08:15.849 ******
2026-01-03 00:34:57.760482 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:57.760489 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:57.760496 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:57.760503 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:57.760508 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:57.760514 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:57.760520 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:57.760526 | orchestrator |
2026-01-03 00:34:57.760532 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-03 00:34:57.760538 | orchestrator | Saturday 03 January 2026 00:34:51 +0000 (0:00:01.274) 0:08:17.124 ******
2026-01-03 00:34:57.760545 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:57.760552 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:57.760559 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:57.760566 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:57.760573 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:57.760580 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:57.760587 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:57.760594 | orchestrator |
2026-01-03 00:34:57.760601 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-03 00:34:57.760608 | orchestrator |
2026-01-03 00:34:57.760615 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-03 00:34:57.760622 | orchestrator | Saturday 03 January 2026 00:34:53 +0000 (0:00:01.120) 0:08:18.245 ******
2026-01-03 00:34:57.760629 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:34:57.760637 | orchestrator |
2026-01-03 00:34:57.760644 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-03 00:34:57.760650 | orchestrator | Saturday 03 January 2026 00:34:53 +0000 (0:00:00.763) 0:08:19.009 ******
2026-01-03 00:34:57.760658 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:57.760665 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:57.760672 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:57.760679 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:57.760692 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:57.760699 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:57.760706 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:57.760713 | orchestrator |
2026-01-03 00:34:57.760719 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-03 00:34:57.760727 | orchestrator | Saturday 03 January 2026 00:34:54 +0000 (0:00:01.009) 0:08:20.018 ******
2026-01-03 00:34:57.760734 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:57.760741 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:57.760748 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:57.760754 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:57.760761 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:57.760768 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:57.760775 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:57.760782 | orchestrator |
2026-01-03 00:34:57.760796 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-03 00:34:57.760804 | orchestrator | Saturday 03 January 2026 00:34:55 +0000 (0:00:01.146) 0:08:21.165 ******
2026-01-03 00:34:57.760810 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:34:57.760818 | orchestrator |
2026-01-03 00:34:57.760825 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-03 00:34:57.760832 | orchestrator | Saturday 03 January 2026 00:34:56 +0000 (0:00:00.967) 0:08:22.133 ******
2026-01-03 00:34:57.760839 | orchestrator | ok: [testbed-manager]
2026-01-03 00:34:57.760846 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:34:57.760853 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:34:57.760859 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:34:57.760866 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:34:57.760873 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:34:57.760880 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:34:57.760887 | orchestrator |
2026-01-03 00:34:57.760902 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-03 00:34:59.232032 | orchestrator | Saturday 03 January 2026 00:34:57 +0000 (0:00:00.814) 0:08:22.947 ******
2026-01-03 00:34:59.232181 | orchestrator | changed: [testbed-manager]
2026-01-03 00:34:59.232201 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:34:59.232213 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:34:59.232225 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:34:59.232236 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:34:59.232247 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:34:59.232258 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:34:59.232270 | orchestrator |
2026-01-03 00:34:59.232282 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:34:59.232294 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-03 00:34:59.232307 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-03 00:34:59.232318 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-03 00:34:59.232329 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-03 00:34:59.232339 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-03 00:34:59.232350 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-03 00:34:59.232361 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-03 00:34:59.232399 | orchestrator |
2026-01-03 00:34:59.232410 | orchestrator |
2026-01-03 00:34:59.232421 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:34:59.232433 | orchestrator | Saturday 03 January 2026 00:34:58 +0000 (0:00:01.070) 0:08:24.018 ******
2026-01-03 00:34:59.232444 | orchestrator | ===============================================================================
2026-01-03 00:34:59.232454 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.91s
2026-01-03 00:34:59.232465 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.03s
2026-01-03 00:34:59.232487 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.90s
2026-01-03 00:34:59.232499 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.09s
2026-01-03 00:34:59.232509 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.70s
2026-01-03 00:34:59.232522 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.86s
2026-01-03 00:34:59.232532 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.49s
2026-01-03 00:34:59.232543 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.51s
2026-01-03 00:34:59.232554 | orchestrator | osism.services.rng : Install rng package ------------------------------- 10.18s
2026-01-03 00:34:59.232564 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.03s
2026-01-03 00:34:59.232577 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.97s
2026-01-03 00:34:59.232590 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.03s
2026-01-03 00:34:59.232602 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.96s
2026-01-03 00:34:59.232615 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.34s
2026-01-03 00:34:59.232627 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.13s
2026-01-03 00:34:59.232655 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.68s
2026-01-03 00:34:59.232678 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.37s
2026-01-03 00:34:59.232691 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.88s
2026-01-03 00:34:59.232719 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.11s
2026-01-03 00:34:59.232733 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.55s
2026-01-03 00:34:59.509860 | orchestrator | + osism apply fail2ban
2026-01-03 00:35:12.186987 | orchestrator | 2026-01-03 00:35:12 | INFO  | Task b65be28d-afb6-4686-b459-2ceada4bb476 (fail2ban) was prepared for execution.
2026-01-03 00:35:12.187152 | orchestrator | 2026-01-03 00:35:12 | INFO  | It takes a moment until task b65be28d-afb6-4686-b459-2ceada4bb476 (fail2ban) has been started and output is visible here.
2026-01-03 00:35:33.946526 | orchestrator |
2026-01-03 00:35:33.946774 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-03 00:35:33.946824 | orchestrator |
2026-01-03 00:35:33.946863 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-03 00:35:33.946899 | orchestrator | Saturday 03 January 2026 00:35:16 +0000 (0:00:00.258) 0:00:00.258 ******
2026-01-03 00:35:33.946936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:35:33.946975 | orchestrator |
2026-01-03 00:35:33.946994 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-03 00:35:33.947011 | orchestrator | Saturday 03 January 2026 00:35:17 +0000 (0:00:01.106) 0:00:01.365 ******
2026-01-03 00:35:33.947028 | orchestrator | changed: [testbed-manager]
2026-01-03 00:35:33.947125 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:35:33.947149 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:35:33.947169 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:35:33.947187 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:35:33.947206 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:35:33.947226 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:35:33.947237 | orchestrator |
2026-01-03 00:35:33.947248 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-03 00:35:33.947259 | orchestrator | Saturday 03 January 2026 00:35:28 +0000 (0:00:11.317) 0:00:12.683 ******
2026-01-03 00:35:33.947270 | orchestrator | changed: [testbed-manager]
2026-01-03 00:35:33.947281 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:35:33.947292 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:35:33.947302 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:35:33.947313 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:35:33.947323 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:35:33.947334 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:35:33.947344 | orchestrator |
2026-01-03 00:35:33.947355 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-03 00:35:33.947366 | orchestrator | Saturday 03 January 2026 00:35:30 +0000 (0:00:01.447) 0:00:14.130 ******
2026-01-03 00:35:33.947377 | orchestrator | ok: [testbed-manager]
2026-01-03 00:35:33.947389 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:35:33.947399 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:35:33.947410 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:35:33.947421 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:35:33.947431 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:35:33.947442 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:35:33.947452 | orchestrator |
2026-01-03 00:35:33.947463 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-03 00:35:33.947474 | orchestrator | Saturday 03 January 2026 00:35:31 +0000 (0:00:01.457) 0:00:15.588 ******
2026-01-03 00:35:33.947485 | orchestrator | changed: [testbed-manager]
2026-01-03 00:35:33.947496 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:35:33.947506 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:35:33.947517 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:35:33.947528 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:35:33.947539 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:35:33.947549 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:35:33.947560 | orchestrator |
2026-01-03 00:35:33.947572 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:35:33.947583 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:35:33.947595 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:35:33.947606 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:35:33.947617 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:35:33.947627 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:35:33.947638 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:35:33.947649 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:35:33.947659 | orchestrator |
2026-01-03 00:35:33.947670 | orchestrator |
2026-01-03 00:35:33.947681 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:35:33.947703 | orchestrator | Saturday 03 January 2026 00:35:33 +0000 (0:00:01.628) 0:00:17.216 ******
2026-01-03 00:35:33.947713 | orchestrator | ===============================================================================
2026-01-03 00:35:33.947724 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.32s
2026-01-03 00:35:33.947753 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.63s
2026-01-03 00:35:33.947764 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.46s
2026-01-03 00:35:33.947775 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.45s
2026-01-03 00:35:33.947786 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.11s
2026-01-03 00:35:34.214418 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-03 00:35:34.214514 | orchestrator | + osism apply network
2026-01-03 00:35:46.286576 | orchestrator | 2026-01-03 00:35:46 | INFO  | Task 14ace86b-18fb-42cf-9aa8-572cdd77391a (network) was prepared for execution.
2026-01-03 00:35:46.286677 | orchestrator | 2026-01-03 00:35:46 | INFO  | It takes a moment until task 14ace86b-18fb-42cf-9aa8-572cdd77391a (network) has been started and output is visible here.
2026-01-03 00:36:14.320267 | orchestrator |
2026-01-03 00:36:14.320381 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-03 00:36:14.320399 | orchestrator |
2026-01-03 00:36:14.320411 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-03 00:36:14.320423 | orchestrator | Saturday 03 January 2026 00:35:50 +0000 (0:00:00.186) 0:00:00.186 ******
2026-01-03 00:36:14.320434 | orchestrator | ok: [testbed-manager]
2026-01-03 00:36:14.320446 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:36:14.320457 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:36:14.320468 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:36:14.320479 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:36:14.320490 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:36:14.320500 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:36:14.320511 | orchestrator |
2026-01-03 00:36:14.320522 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-03 00:36:14.320533 | orchestrator | Saturday 03 January 2026 00:35:50 +0000 (0:00:00.583) 0:00:00.769 ******
2026-01-03 00:36:14.320544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:36:14.320557 | orchestrator |
2026-01-03 00:36:14.320568 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-03 00:36:14.320579 | orchestrator | Saturday 03 January 2026 00:35:51 +0000 (0:00:00.999) 0:00:01.769 ******
2026-01-03 00:36:14.320590 | orchestrator | ok: [testbed-manager]
2026-01-03 00:36:14.320601 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:36:14.320611 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:36:14.320622 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:36:14.320632 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:36:14.320643 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:36:14.320653 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:36:14.320664 | orchestrator |
2026-01-03 00:36:14.320675 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-03 00:36:14.320685 | orchestrator | Saturday 03 January 2026 00:35:53 +0000 (0:00:02.015) 0:00:03.785 ******
2026-01-03 00:36:14.320696 | orchestrator | ok: [testbed-manager]
2026-01-03 00:36:14.320707 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:36:14.320718 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:36:14.320728 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:36:14.320739 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:36:14.320749 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:36:14.320760 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:36:14.320770 | orchestrator |
2026-01-03 00:36:14.320783 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-03 00:36:14.320823 | orchestrator | Saturday 03 January 2026 00:35:55 +0000 (0:00:01.895) 0:00:05.680 ******
2026-01-03 00:36:14.320837 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-03 00:36:14.320851 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-03 00:36:14.320864 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-03 00:36:14.320877 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-03 00:36:14.320890 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-03 00:36:14.320902 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-03 00:36:14.320914 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-03 00:36:14.320927 | orchestrator |
2026-01-03 00:36:14.320940 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-03 00:36:14.320953 | orchestrator | Saturday 03 January 2026 00:35:56 +0000 (0:00:01.092) 0:00:06.773 ******
2026-01-03 00:36:14.320966 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-03 00:36:14.320979 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-03 00:36:14.320991 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-03 00:36:14.321004 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-03 00:36:14.321016 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-03 00:36:14.321028 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-03 00:36:14.321058 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-03 00:36:14.321072 | orchestrator |
2026-01-03 00:36:14.321084 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-03 00:36:14.321095 | orchestrator | Saturday 03 January 2026 00:36:00 +0000 (0:00:03.265) 0:00:10.039 ******
2026-01-03 00:36:14.321106 | orchestrator | changed: [testbed-manager]
2026-01-03 00:36:14.321116 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:36:14.321127 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:36:14.321138 | orchestrator | changed:
[testbed-node-2] 2026-01-03 00:36:14.321148 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:36:14.321159 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:36:14.321169 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:36:14.321180 | orchestrator | 2026-01-03 00:36:14.321191 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-03 00:36:14.321202 | orchestrator | Saturday 03 January 2026 00:36:01 +0000 (0:00:01.580) 0:00:11.619 ****** 2026-01-03 00:36:14.321212 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 00:36:14.321223 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-03 00:36:14.321234 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 00:36:14.321244 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-03 00:36:14.321255 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-03 00:36:14.321266 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-03 00:36:14.321276 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-03 00:36:14.321287 | orchestrator | 2026-01-03 00:36:14.321298 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-03 00:36:14.321309 | orchestrator | Saturday 03 January 2026 00:36:03 +0000 (0:00:01.652) 0:00:13.272 ****** 2026-01-03 00:36:14.321319 | orchestrator | ok: [testbed-manager] 2026-01-03 00:36:14.321330 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:36:14.321359 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:36:14.321370 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:36:14.321381 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:36:14.321392 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:36:14.321402 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:36:14.321413 | orchestrator | 2026-01-03 00:36:14.321424 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-03 00:36:14.321452 | 
orchestrator | Saturday 03 January 2026 00:36:04 +0000 (0:00:01.147) 0:00:14.419 ****** 2026-01-03 00:36:14.321463 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:36:14.321474 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:36:14.321485 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:36:14.321495 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:36:14.321514 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:36:14.321525 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:36:14.321536 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:36:14.321546 | orchestrator | 2026-01-03 00:36:14.321557 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-03 00:36:14.321568 | orchestrator | Saturday 03 January 2026 00:36:05 +0000 (0:00:00.638) 0:00:15.058 ****** 2026-01-03 00:36:14.321579 | orchestrator | ok: [testbed-manager] 2026-01-03 00:36:14.321589 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:36:14.321600 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:36:14.321610 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:36:14.321621 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:36:14.321632 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:36:14.321642 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:36:14.321652 | orchestrator | 2026-01-03 00:36:14.321663 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-03 00:36:14.321674 | orchestrator | Saturday 03 January 2026 00:36:07 +0000 (0:00:02.323) 0:00:17.381 ****** 2026-01-03 00:36:14.321685 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:36:14.321695 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:36:14.321706 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:36:14.321716 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:36:14.321727 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:36:14.321738 | 
orchestrator | skipping: [testbed-node-5] 2026-01-03 00:36:14.321749 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-03 00:36:14.321761 | orchestrator | 2026-01-03 00:36:14.321772 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-03 00:36:14.321782 | orchestrator | Saturday 03 January 2026 00:36:08 +0000 (0:00:00.873) 0:00:18.255 ****** 2026-01-03 00:36:14.321793 | orchestrator | ok: [testbed-manager] 2026-01-03 00:36:14.321803 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:36:14.321814 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:36:14.321824 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:36:14.321835 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:36:14.321845 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:36:14.321856 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:36:14.321866 | orchestrator | 2026-01-03 00:36:14.321877 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-03 00:36:14.321888 | orchestrator | Saturday 03 January 2026 00:36:09 +0000 (0:00:01.622) 0:00:19.877 ****** 2026-01-03 00:36:14.321899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:36:14.321911 | orchestrator | 2026-01-03 00:36:14.321922 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-03 00:36:14.321933 | orchestrator | Saturday 03 January 2026 00:36:11 +0000 (0:00:01.204) 0:00:21.081 ****** 2026-01-03 00:36:14.321943 | orchestrator | ok: [testbed-manager] 2026-01-03 00:36:14.321954 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:36:14.321965 | orchestrator 
| ok: [testbed-node-1] 2026-01-03 00:36:14.321975 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:36:14.321986 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:36:14.321996 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:36:14.322007 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:36:14.322082 | orchestrator | 2026-01-03 00:36:14.322096 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-03 00:36:14.322107 | orchestrator | Saturday 03 January 2026 00:36:12 +0000 (0:00:01.157) 0:00:22.239 ****** 2026-01-03 00:36:14.322118 | orchestrator | ok: [testbed-manager] 2026-01-03 00:36:14.322128 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:36:14.322139 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:36:14.322149 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:36:14.322167 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:36:14.322177 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:36:14.322188 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:36:14.322199 | orchestrator | 2026-01-03 00:36:14.322209 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-03 00:36:14.322220 | orchestrator | Saturday 03 January 2026 00:36:12 +0000 (0:00:00.666) 0:00:22.905 ****** 2026-01-03 00:36:14.322230 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:36:14.322241 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:36:14.322252 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:36:14.322262 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:36:14.322273 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:36:14.322283 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:36:14.322300 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:36:14.322311 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:36:14.322321 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:36:14.322332 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:36:14.322342 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:36:14.322353 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-03 00:36:14.322363 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:36:14.322374 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-03 00:36:14.322385 | orchestrator | 2026-01-03 00:36:14.322404 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-03 00:36:30.663540 | orchestrator | Saturday 03 January 2026 00:36:14 +0000 (0:00:01.355) 0:00:24.261 ****** 2026-01-03 00:36:30.663653 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:36:30.663669 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:36:30.663681 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:36:30.663692 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:36:30.663703 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:36:30.663714 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:36:30.663725 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:36:30.663737 | orchestrator | 2026-01-03 00:36:30.663749 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-03 00:36:30.663761 | orchestrator | Saturday 03 January 2026 00:36:14 +0000 (0:00:00.636) 0:00:24.898 ****** 2026-01-03 00:36:30.663774 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-2, testbed-manager, testbed-node-1, testbed-node-3, testbed-node-0, testbed-node-4, testbed-node-5 2026-01-03 00:36:30.663787 | orchestrator | 2026-01-03 00:36:30.663798 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-03 00:36:30.663809 | orchestrator | Saturday 03 January 2026 00:36:19 +0000 (0:00:04.676) 0:00:29.574 ****** 2026-01-03 00:36:30.663822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:36:30.663834 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:36:30.663860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:30.663899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:36:30.663912 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 
42}}) 2026-01-03 00:36:30.663923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:36:30.663934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:36:30.663945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:36:30.663956 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:30.663977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:30.663989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:30.664067 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:30.664083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:30.664096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:30.664109 | orchestrator | 2026-01-03 00:36:30.664123 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-03 00:36:30.664137 | orchestrator | Saturday 03 January 2026 00:36:25 +0000 (0:00:05.762) 0:00:35.337 ****** 2026-01-03 00:36:30.664150 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:36:30.664171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:36:30.664184 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:30.664198 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:36:30.664211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:36:30.664224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:36:30.664237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:36:30.664249 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-03 00:36:30.664263 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:30.664282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 
'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:30.664296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:30.664309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:30.664334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:44.435096 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-03 00:36:44.435245 | orchestrator | 2026-01-03 00:36:44.435275 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-03 00:36:44.435296 | orchestrator | Saturday 03 January 2026 00:36:30 +0000 (0:00:05.261) 0:00:40.598 ****** 2026-01-03 00:36:44.435352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:36:44.435374 | orchestrator | 2026-01-03 00:36:44.435394 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
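For readers reconstructing what the two "Create systemd networkd ... files" tasks above wrote out: under systemd-networkd a VXLAN interface is defined by a `.netdev`/`.network` file pair. The sketch below is an assumption, not the role's actual template output; the file names (`30-vxlan1.netdev`/`.network`) are taken from the item list of the later "Remove unused configuration files" task, the values (VNI 23, local IP, MTU, address, unicast peer list) from the testbed-node-0 item in the log, and the key names follow systemd.netdev(5) and systemd.network(5).

```ini
# /etc/systemd/network/30-vxlan1.netdev -- hypothetical rendering for testbed-node-0
[NetDev]
Name=vxlan1
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=23
Local=192.168.16.10
# No multicast Group=: peers are reached via static FDB entries (see .network file)

# /etc/systemd/network/30-vxlan1.network
[Match]
Name=vxlan1

[Network]
Address=192.168.128.10/20

# One all-zero-MAC FDB entry per peer in 'dests' gives head-end replication
# of broadcast/unknown-unicast traffic; repeat the section per destination.
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.11
```

Note that the device only comes up once some underlying interface's `.network` file references it via `VXLAN=vxlan1`; how the role wires that up is not visible in this log excerpt.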
2026-01-03 00:36:44.435413 | orchestrator | Saturday 03 January 2026 00:36:31 +0000 (0:00:01.110) 0:00:41.709 ******
2026-01-03 00:36:44.435432 | orchestrator | ok: [testbed-manager]
2026-01-03 00:36:44.435452 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:36:44.435471 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:36:44.435489 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:36:44.435507 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:36:44.435525 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:36:44.435542 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:36:44.435560 | orchestrator |
2026-01-03 00:36:44.435579 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-03 00:36:44.435597 | orchestrator | Saturday 03 January 2026 00:36:32 +0000 (0:00:01.028) 0:00:42.737 ******
2026-01-03 00:36:44.435615 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-03 00:36:44.435634 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-03 00:36:44.435653 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-03 00:36:44.435670 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-03 00:36:44.435687 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:36:44.435705 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-03 00:36:44.435724 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-03 00:36:44.435742 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-03 00:36:44.435759 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-03 00:36:44.435777 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:36:44.435796 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-03 00:36:44.435814 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-03 00:36:44.435832 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-03 00:36:44.435850 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-03 00:36:44.435868 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:36:44.435886 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-03 00:36:44.435905 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-03 00:36:44.435923 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-03 00:36:44.435941 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-03 00:36:44.435959 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:36:44.435978 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-03 00:36:44.435997 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-03 00:36:44.436051 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-03 00:36:44.436071 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-03 00:36:44.436090 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:36:44.436130 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-03 00:36:44.436150 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-03 00:36:44.436189 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-03 00:36:44.436208 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-03 00:36:44.436226 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:36:44.436245 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-03 00:36:44.436263 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-03 00:36:44.436280 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-03 00:36:44.436299 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-03 00:36:44.436317 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:36:44.436336 | orchestrator |
2026-01-03 00:36:44.436354 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-01-03 00:36:44.436401 | orchestrator | Saturday 03 January 2026 00:36:33 +0000 (0:00:00.817) 0:00:43.554 ******
2026-01-03 00:36:44.436421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:36:44.436440 | orchestrator |
2026-01-03 00:36:44.436459 | orchestrator | TASK [osism.commons.network : Install required packages for network-extra-init] ***
2026-01-03 00:36:44.436478 | orchestrator | Saturday 03 January 2026 00:36:34 +0000 (0:00:01.297) 0:00:44.852 ******
2026-01-03 00:36:44.436496 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:36:44.436516 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:36:44.436534 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:36:44.436553 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:36:44.436571 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:36:44.436590 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:36:44.436608 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:36:44.436627 | orchestrator |
2026-01-03 00:36:44.436645 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-01-03 00:36:44.436664 | orchestrator | Saturday 03 January 2026 00:36:35 +0000 (0:00:00.934) 0:00:45.511 ******
2026-01-03 00:36:44.436682 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:36:44.436701 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:36:44.436718 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:36:44.436738 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:36:44.436757 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:36:44.436775 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:36:44.436794 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:36:44.436813 | orchestrator |
2026-01-03 00:36:44.436831 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-01-03 00:36:44.436851 | orchestrator | Saturday 03 January 2026 00:36:36 +0000 (0:00:00.596) 0:00:46.446 ******
2026-01-03 00:36:44.436868 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:36:44.436886 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:36:44.436903 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:36:44.436919 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:36:44.436935 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:36:44.436951 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:36:44.436968 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:36:44.436985 | orchestrator |
2026-01-03 00:36:44.437002 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-01-03 00:36:44.437054 | orchestrator | Saturday 03 January 2026 00:36:37 +0000 (0:00:00.765) 0:00:47.042 ******
2026-01-03 00:36:44.437070 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:36:44.437087 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:36:44.437105 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:36:44.437122 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:36:44.437139 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:36:44.437178 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:36:44.437199 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:36:44.437218 | orchestrator |
2026-01-03 00:36:44.437238 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-01-03 00:36:44.437257 | orchestrator | Saturday 03 January 2026 00:36:37 +0000 (0:00:00.765) 0:00:47.808 ******
2026-01-03 00:36:44.437277 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:36:44.437296 | orchestrator | ok: [testbed-manager]
2026-01-03 00:36:44.437316 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:36:44.437334 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:36:44.437353 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:36:44.437372 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:36:44.437391 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:36:44.437410 | orchestrator |
2026-01-03 00:36:44.437431 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-01-03 00:36:44.437450 | orchestrator | Saturday 03 January 2026 00:36:39 +0000 (0:00:01.497) 0:00:49.305 ******
2026-01-03 00:36:44.437470 | orchestrator | ok: [testbed-manager]
2026-01-03 00:36:44.437490 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:36:44.437509 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:36:44.437528 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:36:44.437547 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:36:44.437566 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:36:44.437584 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:36:44.437603 | orchestrator |
2026-01-03 00:36:44.437621 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-01-03 00:36:44.437638 | orchestrator | Saturday 03 January 2026 00:36:40 +0000 (0:00:01.210) 0:00:50.515 ******
2026-01-03 00:36:44.437655 | orchestrator | ok: [testbed-manager]
2026-01-03 00:36:44.437674 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:36:44.437692 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:36:44.437710 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:36:44.437727 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:36:44.437746 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:36:44.437764 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:36:44.437783 | orchestrator |
2026-01-03 00:36:44.437802 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-03 00:36:44.437833 | orchestrator | Saturday 03 January 2026 00:36:43 +0000 (0:00:02.442) 0:00:52.958 ******
2026-01-03 00:36:44.437852 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:36:44.437873 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:36:44.437892 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:36:44.437911 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:36:44.437931 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:36:44.437951 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:36:44.437971 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:36:44.437990 | orchestrator |
2026-01-03 00:36:44.438112 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-03 00:36:44.438140 | orchestrator | Saturday 03 January 2026 00:36:43 +0000 (0:00:00.684) 0:00:53.643 ******
2026-01-03 00:36:44.438158 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:36:44.438175 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:36:44.438193 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:36:44.438211 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:36:44.438228 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:36:44.438246 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:36:44.438264 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:36:44.438282 | orchestrator |
2026-01-03 00:36:44.438301 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:36:44.782692 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-03 00:36:44.782817 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-03 00:36:44.782876 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-03 00:36:44.782897 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-03 00:36:44.782917 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-03 00:36:44.782936 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-03 00:36:44.782955 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-03 00:36:44.782974 | orchestrator |
2026-01-03 00:36:44.782994 | orchestrator |
2026-01-03 00:36:44.783085 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:36:44.783108 | orchestrator | Saturday 03 January 2026 00:36:44 +0000 (0:00:00.736) 0:00:54.380 ******
2026-01-03 00:36:44.783126 | orchestrator | ===============================================================================
2026-01-03 00:36:44.783143 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.76s
2026-01-03 00:36:44.783160 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.26s
2026-01-03 00:36:44.783179 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.68s
2026-01-03 00:36:44.783198 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.27s
2026-01-03 00:36:44.783218 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.44s
2026-01-03 00:36:44.783239 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.32s
2026-01-03 00:36:44.783258 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.02s
2026-01-03 00:36:44.783278 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.90s
2026-01-03 00:36:44.783297 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.65s
2026-01-03 00:36:44.783317 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.62s
2026-01-03 00:36:44.783337 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.58s
2026-01-03 00:36:44.783357 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.50s
2026-01-03 00:36:44.783377 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.36s
2026-01-03 00:36:44.783396 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.30s
2026-01-03 00:36:44.783416 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.21s
2026-01-03 00:36:44.783436 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.20s
2026-01-03 00:36:44.783455 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s
2026-01-03 00:36:44.783475 | orchestrator |
osism.commons.network : Check if path for interface file exists --------- 1.15s 2026-01-03 00:36:44.783495 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.11s 2026-01-03 00:36:44.783516 | orchestrator | osism.commons.network : Create required directories --------------------- 1.09s 2026-01-03 00:36:45.067268 | orchestrator | + osism apply wireguard 2026-01-03 00:36:57.108719 | orchestrator | 2026-01-03 00:36:57 | INFO  | Task 3bda7c9b-c3b8-436e-ad47-b5a051e94ddb (wireguard) was prepared for execution. 2026-01-03 00:36:57.108828 | orchestrator | 2026-01-03 00:36:57 | INFO  | It takes a moment until task 3bda7c9b-c3b8-436e-ad47-b5a051e94ddb (wireguard) has been started and output is visible here. 2026-01-03 00:37:16.477231 | orchestrator | 2026-01-03 00:37:16.477382 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-01-03 00:37:16.477429 | orchestrator | 2026-01-03 00:37:16.477443 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-01-03 00:37:16.477456 | orchestrator | Saturday 03 January 2026 00:37:00 +0000 (0:00:00.160) 0:00:00.160 ****** 2026-01-03 00:37:16.477554 | orchestrator | ok: [testbed-manager] 2026-01-03 00:37:16.477585 | orchestrator | 2026-01-03 00:37:16.477597 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-01-03 00:37:16.477608 | orchestrator | Saturday 03 January 2026 00:37:02 +0000 (0:00:01.186) 0:00:01.346 ****** 2026-01-03 00:37:16.477619 | orchestrator | changed: [testbed-manager] 2026-01-03 00:37:16.477631 | orchestrator | 2026-01-03 00:37:16.477641 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-01-03 00:37:16.477652 | orchestrator | Saturday 03 January 2026 00:37:07 +0000 (0:00:05.914) 0:00:07.261 ****** 2026-01-03 00:37:16.477663 | orchestrator | changed: [testbed-manager] 2026-01-03 
00:37:16.477674 | orchestrator | 2026-01-03 00:37:16.477684 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-01-03 00:37:16.477695 | orchestrator | Saturday 03 January 2026 00:37:08 +0000 (0:00:00.555) 0:00:07.817 ****** 2026-01-03 00:37:16.477706 | orchestrator | changed: [testbed-manager] 2026-01-03 00:37:16.477716 | orchestrator | 2026-01-03 00:37:16.477727 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-01-03 00:37:16.477738 | orchestrator | Saturday 03 January 2026 00:37:08 +0000 (0:00:00.413) 0:00:08.230 ****** 2026-01-03 00:37:16.477748 | orchestrator | ok: [testbed-manager] 2026-01-03 00:37:16.477759 | orchestrator | 2026-01-03 00:37:16.477770 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-01-03 00:37:16.477781 | orchestrator | Saturday 03 January 2026 00:37:09 +0000 (0:00:00.623) 0:00:08.853 ****** 2026-01-03 00:37:16.477792 | orchestrator | ok: [testbed-manager] 2026-01-03 00:37:16.477817 | orchestrator | 2026-01-03 00:37:16.477828 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-01-03 00:37:16.477839 | orchestrator | Saturday 03 January 2026 00:37:10 +0000 (0:00:00.420) 0:00:09.273 ****** 2026-01-03 00:37:16.477849 | orchestrator | ok: [testbed-manager] 2026-01-03 00:37:16.477860 | orchestrator | 2026-01-03 00:37:16.477872 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-01-03 00:37:16.477883 | orchestrator | Saturday 03 January 2026 00:37:10 +0000 (0:00:00.460) 0:00:09.734 ****** 2026-01-03 00:37:16.477894 | orchestrator | changed: [testbed-manager] 2026-01-03 00:37:16.477905 | orchestrator | 2026-01-03 00:37:16.477915 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-01-03 00:37:16.477926 | orchestrator | Saturday 03 January 2026 
00:37:11 +0000 (0:00:01.141) 0:00:10.876 ****** 2026-01-03 00:37:16.477937 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-03 00:37:16.477961 | orchestrator | changed: [testbed-manager] 2026-01-03 00:37:16.477972 | orchestrator | 2026-01-03 00:37:16.478005 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-01-03 00:37:16.478066 | orchestrator | Saturday 03 January 2026 00:37:12 +0000 (0:00:00.897) 0:00:11.774 ****** 2026-01-03 00:37:16.478079 | orchestrator | changed: [testbed-manager] 2026-01-03 00:37:16.478102 | orchestrator | 2026-01-03 00:37:16.478113 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-01-03 00:37:16.478124 | orchestrator | Saturday 03 January 2026 00:37:14 +0000 (0:00:01.675) 0:00:13.449 ****** 2026-01-03 00:37:16.478135 | orchestrator | changed: [testbed-manager] 2026-01-03 00:37:16.478145 | orchestrator | 2026-01-03 00:37:16.478156 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:37:16.478185 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:37:16.478198 | orchestrator | 2026-01-03 00:37:16.478209 | orchestrator | 2026-01-03 00:37:16.478220 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:37:16.478244 | orchestrator | Saturday 03 January 2026 00:37:16 +0000 (0:00:01.950) 0:00:15.400 ****** 2026-01-03 00:37:16.478255 | orchestrator | =============================================================================== 2026-01-03 00:37:16.478266 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.91s 2026-01-03 00:37:16.478276 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.95s 2026-01-03 00:37:16.478287 | orchestrator | osism.services.wireguard : 
Manage wg-quick@wg0.service service ---------- 1.68s 2026-01-03 00:37:16.478298 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.19s 2026-01-03 00:37:16.478309 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.14s 2026-01-03 00:37:16.478320 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.90s 2026-01-03 00:37:16.478330 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.62s 2026-01-03 00:37:16.478341 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2026-01-03 00:37:16.478352 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.46s 2026-01-03 00:37:16.478363 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s 2026-01-03 00:37:16.478374 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s 2026-01-03 00:37:16.756133 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-01-03 00:37:16.795212 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-01-03 00:37:16.795293 | orchestrator | Dload Upload Total Spent Left Speed 2026-01-03 00:37:16.867747 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 191 0 --:--:-- --:--:-- --:--:-- 194 2026-01-03 00:37:16.884133 | orchestrator | + osism apply --environment custom workarounds 2026-01-03 00:37:18.828224 | orchestrator | 2026-01-03 00:37:18 | INFO  | Trying to run play workarounds in environment custom 2026-01-03 00:37:29.000685 | orchestrator | 2026-01-03 00:37:28 | INFO  | Task b13ce18f-9a0c-4d28-a2e4-4ec7c1d7414d (workarounds) was prepared for execution. 
2026-01-03 00:37:29.000798 | orchestrator | 2026-01-03 00:37:28 | INFO  | It takes a moment until task b13ce18f-9a0c-4d28-a2e4-4ec7c1d7414d (workarounds) has been started and output is visible here. 2026-01-03 00:37:53.044783 | orchestrator | 2026-01-03 00:37:53.044891 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:37:53.044908 | orchestrator | 2026-01-03 00:37:53.044921 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-01-03 00:37:53.044933 | orchestrator | Saturday 03 January 2026 00:37:32 +0000 (0:00:00.091) 0:00:00.091 ****** 2026-01-03 00:37:53.044988 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-01-03 00:37:53.045000 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-01-03 00:37:53.045011 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-01-03 00:37:53.045022 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-01-03 00:37:53.045033 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-01-03 00:37:53.045044 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-01-03 00:37:53.045055 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-01-03 00:37:53.045066 | orchestrator | 2026-01-03 00:37:53.045077 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-01-03 00:37:53.045088 | orchestrator | 2026-01-03 00:37:53.045099 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-03 00:37:53.045109 | orchestrator | Saturday 03 January 2026 00:37:33 +0000 (0:00:00.608) 0:00:00.700 ****** 2026-01-03 00:37:53.045121 | orchestrator | ok: [testbed-manager] 2026-01-03 00:37:53.045154 | orchestrator | 2026-01-03 00:37:53.045165 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-01-03 00:37:53.045176 | orchestrator | 2026-01-03 00:37:53.045186 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-03 00:37:53.045197 | orchestrator | Saturday 03 January 2026 00:37:35 +0000 (0:00:02.139) 0:00:02.840 ****** 2026-01-03 00:37:53.045208 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:37:53.045219 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:37:53.045229 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:37:53.045240 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:37:53.045251 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:37:53.045261 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:37:53.045272 | orchestrator | 2026-01-03 00:37:53.045283 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-01-03 00:37:53.045293 | orchestrator | 2026-01-03 00:37:53.045304 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-01-03 00:37:53.045315 | orchestrator | Saturday 03 January 2026 00:37:37 +0000 (0:00:01.798) 0:00:04.639 ****** 2026-01-03 00:37:53.045327 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-03 00:37:53.045341 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-03 00:37:53.045353 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-03 00:37:53.045366 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-03 00:37:53.045378 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-03 00:37:53.045391 | orchestrator 
| changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-03 00:37:53.045404 | orchestrator | 2026-01-03 00:37:53.045417 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2026-01-03 00:37:53.045430 | orchestrator | Saturday 03 January 2026 00:37:38 +0000 (0:00:01.493) 0:00:06.132 ****** 2026-01-03 00:37:53.045443 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:37:53.045457 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:37:53.045469 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:37:53.045481 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:37:53.045494 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:37:53.045506 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:37:53.045518 | orchestrator | 2026-01-03 00:37:53.045531 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-01-03 00:37:53.045544 | orchestrator | Saturday 03 January 2026 00:37:42 +0000 (0:00:03.133) 0:00:09.265 ****** 2026-01-03 00:37:53.045557 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:37:53.045570 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:37:53.045582 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:37:53.045594 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:37:53.045608 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:37:53.045621 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:37:53.045633 | orchestrator | 2026-01-03 00:37:53.045646 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-01-03 00:37:53.045659 | orchestrator | 2026-01-03 00:37:53.045671 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-01-03 00:37:53.045684 | orchestrator | Saturday 03 January 2026 00:37:42 +0000 (0:00:00.655) 0:00:09.921 ****** 2026-01-03 
00:37:53.045695 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:37:53.045706 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:37:53.045716 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:37:53.045730 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:37:53.045758 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:37:53.045777 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:37:53.045795 | orchestrator | changed: [testbed-manager] 2026-01-03 00:37:53.045824 | orchestrator | 2026-01-03 00:37:53.045841 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-01-03 00:37:53.045858 | orchestrator | Saturday 03 January 2026 00:37:44 +0000 (0:00:01.613) 0:00:11.534 ****** 2026-01-03 00:37:53.045875 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:37:53.045892 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:37:53.045908 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:37:53.045925 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:37:53.045973 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:37:53.045989 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:37:53.046081 | orchestrator | changed: [testbed-manager] 2026-01-03 00:37:53.046106 | orchestrator | 2026-01-03 00:37:53.046117 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-01-03 00:37:53.046129 | orchestrator | Saturday 03 January 2026 00:37:45 +0000 (0:00:01.593) 0:00:13.127 ****** 2026-01-03 00:37:53.046140 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:37:53.046151 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:37:53.046161 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:37:53.046172 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:37:53.046183 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:37:53.046194 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:37:53.046204 | orchestrator | ok: [testbed-manager] 
2026-01-03 00:37:53.046215 | orchestrator | 2026-01-03 00:37:53.046226 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-01-03 00:37:53.046237 | orchestrator | Saturday 03 January 2026 00:37:47 +0000 (0:00:01.563) 0:00:14.691 ****** 2026-01-03 00:37:53.046248 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:37:53.046259 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:37:53.046270 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:37:53.046281 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:37:53.046292 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:37:53.046303 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:37:53.046314 | orchestrator | changed: [testbed-manager] 2026-01-03 00:37:53.046324 | orchestrator | 2026-01-03 00:37:53.046335 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-01-03 00:37:53.046346 | orchestrator | Saturday 03 January 2026 00:37:49 +0000 (0:00:01.785) 0:00:16.476 ****** 2026-01-03 00:37:53.046357 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:37:53.046368 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:37:53.046379 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:37:53.046390 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:37:53.046400 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:37:53.046411 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:37:53.046422 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:37:53.046433 | orchestrator | 2026-01-03 00:37:53.046444 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-01-03 00:37:53.046455 | orchestrator | 2026-01-03 00:37:53.046466 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-01-03 00:37:53.046476 | orchestrator | Saturday 03 January 2026 00:37:49 +0000 (0:00:00.612) 
0:00:17.088 ****** 2026-01-03 00:37:53.046487 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:37:53.046498 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:37:53.046509 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:37:53.046520 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:37:53.046531 | orchestrator | ok: [testbed-manager] 2026-01-03 00:37:53.046542 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:37:53.046552 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:37:53.046563 | orchestrator | 2026-01-03 00:37:53.046574 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:37:53.046586 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:37:53.046599 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:37:53.046621 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:37:53.046632 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:37:53.046643 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:37:53.046654 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:37:53.046665 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:37:53.046676 | orchestrator | 2026-01-03 00:37:53.046687 | orchestrator | 2026-01-03 00:37:53.046698 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:37:53.046709 | orchestrator | Saturday 03 January 2026 00:37:53 +0000 (0:00:03.086) 0:00:20.175 ****** 2026-01-03 00:37:53.046720 | orchestrator | 
=============================================================================== 2026-01-03 00:37:53.046731 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.13s 2026-01-03 00:37:53.046742 | orchestrator | Install python3-docker -------------------------------------------------- 3.09s 2026-01-03 00:37:53.046753 | orchestrator | Apply netplan configuration --------------------------------------------- 2.14s 2026-01-03 00:37:53.046764 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2026-01-03 00:37:53.046781 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.79s 2026-01-03 00:37:53.046792 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.61s 2026-01-03 00:37:53.046803 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.59s 2026-01-03 00:37:53.046814 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.56s 2026-01-03 00:37:53.046825 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s 2026-01-03 00:37:53.046835 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.66s 2026-01-03 00:37:53.046846 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s 2026-01-03 00:37:53.046865 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.61s 2026-01-03 00:37:53.643896 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-01-03 00:38:05.633662 | orchestrator | 2026-01-03 00:38:05 | INFO  | Task 5151ecb5-747e-4266-bead-06ca11c0ada1 (reboot) was prepared for execution. 
2026-01-03 00:38:05.633738 | orchestrator | 2026-01-03 00:38:05 | INFO  | It takes a moment until task 5151ecb5-747e-4266-bead-06ca11c0ada1 (reboot) has been started and output is visible here. 2026-01-03 00:38:15.391134 | orchestrator | 2026-01-03 00:38:15.391228 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-03 00:38:15.391239 | orchestrator | 2026-01-03 00:38:15.391246 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-03 00:38:15.391253 | orchestrator | Saturday 03 January 2026 00:38:09 +0000 (0:00:00.148) 0:00:00.148 ****** 2026-01-03 00:38:15.391260 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:38:15.391268 | orchestrator | 2026-01-03 00:38:15.391274 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-03 00:38:15.391281 | orchestrator | Saturday 03 January 2026 00:38:09 +0000 (0:00:00.082) 0:00:00.230 ****** 2026-01-03 00:38:15.391288 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:38:15.391294 | orchestrator | 2026-01-03 00:38:15.391300 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-03 00:38:15.391327 | orchestrator | Saturday 03 January 2026 00:38:10 +0000 (0:00:00.911) 0:00:01.142 ****** 2026-01-03 00:38:15.391334 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:38:15.391340 | orchestrator | 2026-01-03 00:38:15.391346 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-03 00:38:15.391352 | orchestrator | 2026-01-03 00:38:15.391359 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-03 00:38:15.391365 | orchestrator | Saturday 03 January 2026 00:38:10 +0000 (0:00:00.097) 0:00:01.239 ****** 2026-01-03 00:38:15.391371 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:38:15.391377 | 
orchestrator | 2026-01-03 00:38:15.391383 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-03 00:38:15.391390 | orchestrator | Saturday 03 January 2026 00:38:10 +0000 (0:00:00.089) 0:00:01.329 ****** 2026-01-03 00:38:15.391396 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:38:15.391402 | orchestrator | 2026-01-03 00:38:15.391408 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-03 00:38:15.391414 | orchestrator | Saturday 03 January 2026 00:38:11 +0000 (0:00:00.697) 0:00:02.026 ****** 2026-01-03 00:38:15.391421 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:38:15.391427 | orchestrator | 2026-01-03 00:38:15.391433 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-03 00:38:15.391439 | orchestrator | 2026-01-03 00:38:15.391445 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-03 00:38:15.391451 | orchestrator | Saturday 03 January 2026 00:38:11 +0000 (0:00:00.109) 0:00:02.135 ****** 2026-01-03 00:38:15.391457 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:38:15.391463 | orchestrator | 2026-01-03 00:38:15.391469 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-03 00:38:15.391475 | orchestrator | Saturday 03 January 2026 00:38:11 +0000 (0:00:00.166) 0:00:02.302 ****** 2026-01-03 00:38:15.391482 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:38:15.391488 | orchestrator | 2026-01-03 00:38:15.391494 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-03 00:38:15.391500 | orchestrator | Saturday 03 January 2026 00:38:12 +0000 (0:00:00.676) 0:00:02.978 ****** 2026-01-03 00:38:15.391506 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:38:15.391512 | orchestrator | 2026-01-03 00:38:15.391518 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-03 00:38:15.391524 | orchestrator | 2026-01-03 00:38:15.391530 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-03 00:38:15.391536 | orchestrator | Saturday 03 January 2026 00:38:12 +0000 (0:00:00.099) 0:00:03.078 ****** 2026-01-03 00:38:15.391543 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:38:15.391549 | orchestrator | 2026-01-03 00:38:15.391555 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-03 00:38:15.391561 | orchestrator | Saturday 03 January 2026 00:38:12 +0000 (0:00:00.082) 0:00:03.160 ****** 2026-01-03 00:38:15.391567 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:38:15.391573 | orchestrator | 2026-01-03 00:38:15.391579 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-03 00:38:15.391585 | orchestrator | Saturday 03 January 2026 00:38:13 +0000 (0:00:00.664) 0:00:03.824 ****** 2026-01-03 00:38:15.391591 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:38:15.391597 | orchestrator | 2026-01-03 00:38:15.391603 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-03 00:38:15.391610 | orchestrator | 2026-01-03 00:38:15.391616 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-03 00:38:15.391622 | orchestrator | Saturday 03 January 2026 00:38:13 +0000 (0:00:00.101) 0:00:03.926 ****** 2026-01-03 00:38:15.391628 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:38:15.391634 | orchestrator | 2026-01-03 00:38:15.391641 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-03 00:38:15.391665 | orchestrator | Saturday 03 January 2026 00:38:13 +0000 (0:00:00.092) 0:00:04.018 ****** 2026-01-03 
00:38:15.391673 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:38:15.391680 | orchestrator | 2026-01-03 00:38:15.391688 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-03 00:38:15.391695 | orchestrator | Saturday 03 January 2026 00:38:14 +0000 (0:00:00.722) 0:00:04.741 ****** 2026-01-03 00:38:15.391702 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:38:15.391709 | orchestrator | 2026-01-03 00:38:15.391717 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-03 00:38:15.391724 | orchestrator | 2026-01-03 00:38:15.391731 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-03 00:38:15.391738 | orchestrator | Saturday 03 January 2026 00:38:14 +0000 (0:00:00.109) 0:00:04.851 ****** 2026-01-03 00:38:15.391744 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:38:15.391750 | orchestrator | 2026-01-03 00:38:15.391756 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-03 00:38:15.391762 | orchestrator | Saturday 03 January 2026 00:38:14 +0000 (0:00:00.111) 0:00:04.962 ****** 2026-01-03 00:38:15.391769 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:38:15.391775 | orchestrator | 2026-01-03 00:38:15.391781 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-03 00:38:15.391787 | orchestrator | Saturday 03 January 2026 00:38:15 +0000 (0:00:00.692) 0:00:05.655 ****** 2026-01-03 00:38:15.391806 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:38:15.391813 | orchestrator | 2026-01-03 00:38:15.391819 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:38:15.391827 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:38:15.391834 | orchestrator | 
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:38:15.391840 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:38:15.391846 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:38:15.391852 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:38:15.391858 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:38:15.391865 | orchestrator | 2026-01-03 00:38:15.391871 | orchestrator | 2026-01-03 00:38:15.391877 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:38:15.391883 | orchestrator | Saturday 03 January 2026 00:38:15 +0000 (0:00:00.035) 0:00:05.691 ****** 2026-01-03 00:38:15.391890 | orchestrator | =============================================================================== 2026-01-03 00:38:15.391896 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.36s 2026-01-03 00:38:15.391902 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.63s 2026-01-03 00:38:15.391908 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.55s 2026-01-03 00:38:15.682304 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-01-03 00:38:27.750219 | orchestrator | 2026-01-03 00:38:27 | INFO  | Task ad4690f4-6848-4344-9b84-94e7e2acd274 (wait-for-connection) was prepared for execution. 2026-01-03 00:38:27.750354 | orchestrator | 2026-01-03 00:38:27 | INFO  | It takes a moment until task ad4690f4-6848-4344-9b84-94e7e2acd274 (wait-for-connection) has been started and output is visible here. 
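The reboot play above protects itself with a confirmation task ("Exit playbook, if user did not mean to reboot systems"), which is skipped here only because the job passes `-e ireallymeanit=yes`. A minimal sketch of the same confirm-before-destructive-action gate in shell — the variable name is taken from the trace, but the gate logic below is an illustration, not the playbook's actual implementation:

```shell
#!/usr/bin/env bash
# Confirm-before-reboot gate, same idea as the play's
# "Exit playbook, if user did not mean to reboot systems" task.
# In the job the confirmation arrives as `-e ireallymeanit=yes`;
# here it is set inline so the sketch runs standalone.
ireallymeanit=yes

if [ "${ireallymeanit:-no}" != "yes" ]; then
    echo "refusing to reboot: pass ireallymeanit=yes to confirm" >&2
    exit 1
fi
echo "confirmed: proceeding with reboot tasks"
```

With the variable unset or set to anything but `yes`, the script exits non-zero before any reboot work starts, mirroring the skipped-versus-changed pattern visible in the recap above.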
2026-01-03 00:38:43.662328 | orchestrator | 2026-01-03 00:38:43.662458 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-01-03 00:38:43.662473 | orchestrator | 2026-01-03 00:38:43.662485 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-01-03 00:38:43.662495 | orchestrator | Saturday 03 January 2026 00:38:31 +0000 (0:00:00.201) 0:00:00.201 ****** 2026-01-03 00:38:43.662505 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:38:43.662516 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:38:43.662526 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:38:43.662536 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:38:43.662545 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:38:43.662555 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:38:43.662564 | orchestrator | 2026-01-03 00:38:43.662574 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:38:43.662584 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:38:43.662595 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:38:43.662605 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:38:43.662615 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:38:43.662639 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:38:43.662649 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:38:43.662659 | orchestrator | 2026-01-03 00:38:43.662668 | orchestrator | 2026-01-03 00:38:43.662678 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-03 00:38:43.662687 | orchestrator | Saturday 03 January 2026 00:38:43 +0000 (0:00:11.536) 0:00:11.738 ****** 2026-01-03 00:38:43.662697 | orchestrator | =============================================================================== 2026-01-03 00:38:43.662707 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.54s 2026-01-03 00:38:43.967262 | orchestrator | + osism apply hddtemp 2026-01-03 00:38:56.208939 | orchestrator | 2026-01-03 00:38:56 | INFO  | Task 1ca2da81-60fb-4f2c-97e8-d417469b9994 (hddtemp) was prepared for execution. 2026-01-03 00:38:56.209050 | orchestrator | 2026-01-03 00:38:56 | INFO  | It takes a moment until task 1ca2da81-60fb-4f2c-97e8-d417469b9994 (hddtemp) has been started and output is visible here. 2026-01-03 00:39:24.871534 | orchestrator | 2026-01-03 00:39:24.871647 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-01-03 00:39:24.871663 | orchestrator | 2026-01-03 00:39:24.871676 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-01-03 00:39:24.871687 | orchestrator | Saturday 03 January 2026 00:39:00 +0000 (0:00:00.261) 0:00:00.261 ****** 2026-01-03 00:39:24.871698 | orchestrator | ok: [testbed-manager] 2026-01-03 00:39:24.871711 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:39:24.871722 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:39:24.871733 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:39:24.871743 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:39:24.871754 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:39:24.871766 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:39:24.871777 | orchestrator | 2026-01-03 00:39:24.871788 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-01-03 00:39:24.871799 | orchestrator | Saturday 03 January 2026 
00:39:01 +0000 (0:00:00.700) 0:00:00.962 ****** 2026-01-03 00:39:24.871811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:39:24.871848 | orchestrator | 2026-01-03 00:39:24.871907 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-01-03 00:39:24.871919 | orchestrator | Saturday 03 January 2026 00:39:02 +0000 (0:00:01.171) 0:00:02.133 ****** 2026-01-03 00:39:24.871930 | orchestrator | ok: [testbed-manager] 2026-01-03 00:39:24.871941 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:39:24.871952 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:39:24.871962 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:39:24.871973 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:39:24.871984 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:39:24.871994 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:39:24.872005 | orchestrator | 2026-01-03 00:39:24.872016 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-01-03 00:39:24.872027 | orchestrator | Saturday 03 January 2026 00:39:04 +0000 (0:00:02.254) 0:00:04.387 ****** 2026-01-03 00:39:24.872038 | orchestrator | changed: [testbed-manager] 2026-01-03 00:39:24.872050 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:39:24.872061 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:39:24.872075 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:39:24.872088 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:39:24.872100 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:39:24.872113 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:39:24.872126 | orchestrator | 2026-01-03 00:39:24.872145 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-01-03 00:39:24.872164 | orchestrator | Saturday 03 January 2026 00:39:05 +0000 (0:00:01.199) 0:00:05.587 ****** 2026-01-03 00:39:24.872180 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:39:24.872193 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:39:24.872205 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:39:24.872218 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:39:24.872231 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:39:24.872244 | orchestrator | ok: [testbed-manager] 2026-01-03 00:39:24.872256 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:39:24.872268 | orchestrator | 2026-01-03 00:39:24.872281 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-01-03 00:39:24.872294 | orchestrator | Saturday 03 January 2026 00:39:06 +0000 (0:00:01.157) 0:00:06.745 ****** 2026-01-03 00:39:24.872307 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:39:24.872319 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:39:24.872331 | orchestrator | changed: [testbed-manager] 2026-01-03 00:39:24.872345 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:39:24.872357 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:39:24.872370 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:39:24.872382 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:39:24.872396 | orchestrator | 2026-01-03 00:39:24.872408 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-01-03 00:39:24.872421 | orchestrator | Saturday 03 January 2026 00:39:07 +0000 (0:00:00.816) 0:00:07.562 ****** 2026-01-03 00:39:24.872435 | orchestrator | changed: [testbed-manager] 2026-01-03 00:39:24.872447 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:39:24.872458 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:39:24.872469 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:39:24.872480 | orchestrator | changed: 
[testbed-node-5] 2026-01-03 00:39:24.872491 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:39:24.872502 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:39:24.872513 | orchestrator | 2026-01-03 00:39:24.872524 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-01-03 00:39:24.872535 | orchestrator | Saturday 03 January 2026 00:39:21 +0000 (0:00:13.595) 0:00:21.158 ****** 2026-01-03 00:39:24.872560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:39:24.872581 | orchestrator | 2026-01-03 00:39:24.872592 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-01-03 00:39:24.872603 | orchestrator | Saturday 03 January 2026 00:39:22 +0000 (0:00:01.370) 0:00:22.528 ****** 2026-01-03 00:39:24.872614 | orchestrator | changed: [testbed-manager] 2026-01-03 00:39:24.872624 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:39:24.872635 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:39:24.872646 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:39:24.872657 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:39:24.872668 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:39:24.872678 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:39:24.872689 | orchestrator | 2026-01-03 00:39:24.872700 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:39:24.872711 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:39:24.872742 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:39:24.872754 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:39:24.872766 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:39:24.872777 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:39:24.872788 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:39:24.872798 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:39:24.872809 | orchestrator | 2026-01-03 00:39:24.872820 | orchestrator | 2026-01-03 00:39:24.872831 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:39:24.872842 | orchestrator | Saturday 03 January 2026 00:39:24 +0000 (0:00:01.841) 0:00:24.370 ****** 2026-01-03 00:39:24.872853 | orchestrator | =============================================================================== 2026-01-03 00:39:24.872883 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.60s 2026-01-03 00:39:24.872894 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.25s 2026-01-03 00:39:24.872905 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.84s 2026-01-03 00:39:24.872916 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.37s 2026-01-03 00:39:24.872927 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.20s 2026-01-03 00:39:24.872938 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.17s 2026-01-03 00:39:24.872949 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.16s 2026-01-03 00:39:24.872960 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.82s 2026-01-03 00:39:24.872971 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.70s 2026-01-03 00:39:25.162615 | orchestrator | ++ semver latest 7.1.1 2026-01-03 00:39:25.224494 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-03 00:39:25.224592 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-03 00:39:25.224609 | orchestrator | + sudo systemctl restart manager.service 2026-01-03 00:39:38.866474 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-03 00:39:38.866606 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-03 00:39:38.866635 | orchestrator | + local max_attempts=60 2026-01-03 00:39:38.866654 | orchestrator | + local name=ceph-ansible 2026-01-03 00:39:38.866673 | orchestrator | + local attempt_num=1 2026-01-03 00:39:38.867633 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:39:38.908639 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-03 00:39:38.908770 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-03 00:39:38.908793 | orchestrator | + sleep 5 2026-01-03 00:39:43.914613 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:39:44.070499 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-03 00:39:44.070581 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-03 00:39:44.070593 | orchestrator | + sleep 5 2026-01-03 00:39:49.074203 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:39:49.107617 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-03 00:39:49.107686 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-03 00:39:49.107691 | orchestrator | + sleep 5 2026-01-03 00:39:54.112898 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:39:54.150861 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-03 00:39:54.150950 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-03 00:39:54.150963 | orchestrator | + sleep 5 2026-01-03 00:39:59.156304 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:39:59.195156 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-03 00:39:59.195255 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-03 00:39:59.195271 | orchestrator | + sleep 5 2026-01-03 00:40:04.200138 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:40:04.236438 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-03 00:40:04.236538 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-03 00:40:04.236555 | orchestrator | + sleep 5 2026-01-03 00:40:09.241144 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:40:09.279478 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-03 00:40:09.279554 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-03 00:40:09.279568 | orchestrator | + sleep 5 2026-01-03 00:40:14.284479 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:40:14.315402 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-03 00:40:14.315490 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-03 00:40:14.315504 | orchestrator | + sleep 5 2026-01-03 00:40:19.316123 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:40:19.338493 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-03 00:40:19.338577 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-03 00:40:19.338591 | orchestrator | + sleep 5 2026-01-03 00:40:24.341682 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:40:24.379405 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2026-01-03 00:40:24.379497 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-03 00:40:24.379513 | orchestrator | + sleep 5 2026-01-03 00:40:29.383430 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:40:29.425148 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-03 00:40:29.425214 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-03 00:40:29.425221 | orchestrator | + sleep 5 2026-01-03 00:40:34.431044 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:40:34.469244 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-03 00:40:34.469337 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-03 00:40:34.469353 | orchestrator | + sleep 5 2026-01-03 00:40:39.474441 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:40:39.513985 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-03 00:40:39.514106 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-03 00:40:39.514113 | orchestrator | + sleep 5 2026-01-03 00:40:44.518888 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-03 00:40:44.554348 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-03 00:40:44.554452 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-03 00:40:44.554469 | orchestrator | + local max_attempts=60 2026-01-03 00:40:44.554481 | orchestrator | + local name=kolla-ansible 2026-01-03 00:40:44.554492 | orchestrator | + local attempt_num=1 2026-01-03 00:40:44.554740 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-03 00:40:44.582185 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-03 00:40:44.582441 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-03 00:40:44.582464 | orchestrator | + local max_attempts=60 2026-01-03 
00:40:44.582476 | orchestrator | + local name=osism-ansible 2026-01-03 00:40:44.582487 | orchestrator | + local attempt_num=1 2026-01-03 00:40:44.582859 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-03 00:40:44.616414 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-03 00:40:44.616492 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-03 00:40:44.616683 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-03 00:40:44.786482 | orchestrator | ARA in ceph-ansible already disabled. 2026-01-03 00:40:44.933609 | orchestrator | ARA in kolla-ansible already disabled. 2026-01-03 00:40:45.087727 | orchestrator | ARA in osism-ansible already disabled. 2026-01-03 00:40:45.231907 | orchestrator | ARA in osism-kubernetes already disabled. 2026-01-03 00:40:45.232881 | orchestrator | + osism apply gather-facts 2026-01-03 00:40:57.329949 | orchestrator | 2026-01-03 00:40:57 | INFO  | Task 54846620-271f-4fe0-a799-1d7f4afbe501 (gather-facts) was prepared for execution. 2026-01-03 00:40:57.330077 | orchestrator | 2026-01-03 00:40:57 | INFO  | It takes a moment until task 54846620-271f-4fe0-a799-1d7f4afbe501 (gather-facts) has been started and output is visible here. 
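The `set -x` trace above repeatedly evaluates `docker inspect -f '{{.State.Health.Status}}'` until the container reports `healthy`, sleeping 5 seconds between probes. Reconstructed as a readable function it looks roughly like the following; the body is inferred from the trace (the real script may differ), and the stub `docker` function exists only so the sketch runs without a Docker daemon:

```shell
#!/usr/bin/env bash
# Poll a container's health status until it is "healthy" or the attempt
# budget is exhausted. Structure inferred from the set -x trace above.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name never became healthy" >&2
            return 1
        fi
        sleep 1   # the trace sleeps 5s between probes
    done
}

# Stub docker CLI so the sketch is self-contained: the container reports
# "starting" twice, then "healthy" (roughly what the trace shows).
checks=0
docker() { checks=$((checks + 1)); (( checks < 3 )) && echo starting || echo healthy; }

wait_for_container_healthy 60 ceph-ansible
echo "healthy after $checks checks"
```

The `(( attempt_num++ == max_attempts ))` post-increment matches the trace exactly: the counter is compared before being bumped, so `60` really means 60 probes before giving up.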
2026-01-03 00:41:10.532706 | orchestrator | 2026-01-03 00:41:10.532869 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-03 00:41:10.532888 | orchestrator | 2026-01-03 00:41:10.532900 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-03 00:41:10.532912 | orchestrator | Saturday 03 January 2026 00:41:01 +0000 (0:00:00.193) 0:00:00.193 ****** 2026-01-03 00:41:10.532924 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:41:10.532936 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:41:10.532947 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:41:10.532959 | orchestrator | ok: [testbed-manager] 2026-01-03 00:41:10.532969 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:41:10.532981 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:41:10.532992 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:41:10.533003 | orchestrator | 2026-01-03 00:41:10.533014 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-03 00:41:10.533025 | orchestrator | 2026-01-03 00:41:10.533036 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-03 00:41:10.533047 | orchestrator | Saturday 03 January 2026 00:41:09 +0000 (0:00:08.453) 0:00:08.647 ****** 2026-01-03 00:41:10.533058 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:41:10.533069 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:41:10.533080 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:41:10.533091 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:41:10.533102 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:41:10.533113 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:41:10.533124 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:41:10.533134 | orchestrator | 2026-01-03 00:41:10.533145 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-03 00:41:10.533156 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:41:10.533169 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:41:10.533180 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:41:10.533191 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:41:10.533202 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:41:10.533213 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:41:10.533277 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 00:41:10.533300 | orchestrator | 2026-01-03 00:41:10.533318 | orchestrator | 2026-01-03 00:41:10.533337 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:41:10.533357 | orchestrator | Saturday 03 January 2026 00:41:10 +0000 (0:00:00.510) 0:00:09.158 ****** 2026-01-03 00:41:10.533376 | orchestrator | =============================================================================== 2026-01-03 00:41:10.533394 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.45s 2026-01-03 00:41:10.533412 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-01-03 00:41:10.828402 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-03 00:41:10.839711 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-03 00:41:10.849208 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-03 00:41:10.862335 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-03 00:41:10.871650 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-03 00:41:10.883646 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-03 00:41:10.898195 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-03 00:41:10.908534 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-03 00:41:10.920798 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-03 00:41:10.932466 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-03 00:41:10.947102 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-03 00:41:10.961135 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-03 00:41:10.970306 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-03 00:41:10.981980 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-03 00:41:11.001030 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-03 00:41:11.012409 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-01-03 00:41:11.027519 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-01-03 00:41:11.039663 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-01-03 00:41:11.052287 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-01-03 00:41:11.067578 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-01-03 00:41:11.080139 | orchestrator | + [[ false == \t\r\u\e ]] 2026-01-03 00:41:11.448792 | orchestrator | ok: Runtime: 0:24:05.991908 2026-01-03 00:41:11.567963 | 2026-01-03 00:41:11.568165 | TASK [Deploy services] 2026-01-03 00:41:12.102402 | orchestrator | skipping: Conditional result was False 2026-01-03 00:41:12.121513 | 2026-01-03 00:41:12.121733 | TASK [Deploy in a nutshell] 2026-01-03 00:41:12.876018 | orchestrator | + set -e 2026-01-03 00:41:12.877520 | orchestrator | 2026-01-03 00:41:12.877556 | orchestrator | # PULL IMAGES 2026-01-03 00:41:12.877570 | orchestrator | 2026-01-03 00:41:12.877589 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-03 00:41:12.877609 | orchestrator | ++ export INTERACTIVE=false 2026-01-03 00:41:12.877622 | orchestrator | ++ INTERACTIVE=false 2026-01-03 00:41:12.877664 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-03 00:41:12.877685 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-03 00:41:12.877698 | orchestrator | + source /opt/manager-vars.sh 2026-01-03 00:41:12.877709 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-03 00:41:12.877725 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-03 00:41:12.877735 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-03 00:41:12.877751 | orchestrator | ++ 
CEPH_VERSION=reef 2026-01-03 00:41:12.877789 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-03 00:41:12.877806 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-03 00:41:12.877816 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-03 00:41:12.877830 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-03 00:41:12.877840 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-03 00:41:12.877851 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-03 00:41:12.877861 | orchestrator | ++ export ARA=false 2026-01-03 00:41:12.877871 | orchestrator | ++ ARA=false 2026-01-03 00:41:12.877881 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-03 00:41:12.877891 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-03 00:41:12.877900 | orchestrator | ++ export TEMPEST=true 2026-01-03 00:41:12.877910 | orchestrator | ++ TEMPEST=true 2026-01-03 00:41:12.877919 | orchestrator | ++ export IS_ZUUL=true 2026-01-03 00:41:12.877929 | orchestrator | ++ IS_ZUUL=true 2026-01-03 00:41:12.877939 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-01-03 00:41:12.877949 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-01-03 00:41:12.877958 | orchestrator | ++ export EXTERNAL_API=false 2026-01-03 00:41:12.877968 | orchestrator | ++ EXTERNAL_API=false 2026-01-03 00:41:12.877978 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-03 00:41:12.877989 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-03 00:41:12.878006 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-03 00:41:12.878073 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-03 00:41:12.878092 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-03 00:41:12.878109 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-03 00:41:12.878128 | orchestrator | + echo 2026-01-03 00:41:12.878145 | orchestrator | + echo '# PULL IMAGES' 2026-01-03 00:41:12.878163 | orchestrator | + echo 2026-01-03 00:41:12.878190 | orchestrator | ++ semver latest 7.0.0 2026-01-03 
00:41:12.934967 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-03 00:41:12.935085 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-03 00:41:12.935100 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-01-03 00:41:14.881752 | orchestrator | 2026-01-03 00:41:14 | INFO  | Trying to run play pull-images in environment custom 2026-01-03 00:41:25.063342 | orchestrator | 2026-01-03 00:41:25 | INFO  | Task 84731421-d7b3-422b-8022-3924b79a893d (pull-images) was prepared for execution. 2026-01-03 00:41:25.063458 | orchestrator | 2026-01-03 00:41:25 | INFO  | Task 84731421-d7b3-422b-8022-3924b79a893d is running in background. No more output. Check ARA for logs. 2026-01-03 00:41:27.374134 | orchestrator | 2026-01-03 00:41:27 | INFO  | Trying to run play wipe-partitions in environment custom 2026-01-03 00:41:37.551031 | orchestrator | 2026-01-03 00:41:37 | INFO  | Task edf6e6f8-e3f2-461e-b406-306daa73e590 (wipe-partitions) was prepared for execution. 2026-01-03 00:41:37.551141 | orchestrator | 2026-01-03 00:41:37 | INFO  | It takes a moment until task edf6e6f8-e3f2-461e-b406-306daa73e590 (wipe-partitions) has been started and output is visible here. 
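The trace gates version-dependent steps on `semver <a> <b>`, a three-way compare printing `-1`/`0`/`1`, with `latest` handled as a special case: `[[ -1 -ge 0 ]]` fails, then `[[ latest == latest ]]` lets the step run anyway. A stand-in comparator showing the same gate; the real `semver` helper on the manager is not shown in the log, and the numeric-`x.y.z` parsing below is an assumption:

```shell
#!/usr/bin/env bash
# Stand-in three-way semver comparator (numeric x.y.z only; the real
# helper used by the job is not shown in the log). "latest" sorts below
# every release here, matching `semver latest 7.0.0` printing -1 above.
semver() {
    if [ "$1" = "latest" ]; then echo -1; return; fi
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for i in 0 1 2; do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then echo -1; return; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then echo 1; return; fi
    done
    echo 0
}

# The gate from the trace: take the new code path when the manager
# version is >= 7.0.0 or is the "latest" rolling tag.
MANAGER_VERSION=latest
if [ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 ] || [ "$MANAGER_VERSION" = "latest" ]; then
    echo "manager $MANAGER_VERSION accepted"
fi
```

Treating `latest` as "older than everything" plus an explicit equality check keeps one comparison helper usable for both pinned releases and the rolling tag.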
2026-01-03 00:41:50.036465 | orchestrator |
2026-01-03 00:41:50.036555 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-03 00:41:50.036566 | orchestrator |
2026-01-03 00:41:50.036574 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-03 00:41:50.036586 | orchestrator | Saturday 03 January 2026 00:41:41 +0000 (0:00:00.135) 0:00:00.135 ******
2026-01-03 00:41:50.036594 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:41:50.036601 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:41:50.036609 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:41:50.036616 | orchestrator |
2026-01-03 00:41:50.036623 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-03 00:41:50.036649 | orchestrator | Saturday 03 January 2026 00:41:42 +0000 (0:00:00.588) 0:00:00.724 ******
2026-01-03 00:41:50.036662 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:50.036673 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:50.036689 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:41:50.036700 | orchestrator |
2026-01-03 00:41:50.036710 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-03 00:41:50.036720 | orchestrator | Saturday 03 January 2026 00:41:42 +0000 (0:00:00.356) 0:00:01.080 ******
2026-01-03 00:41:50.036828 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:41:50.036844 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:41:50.036854 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:41:50.036866 | orchestrator |
2026-01-03 00:41:50.036876 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-03 00:41:50.036883 | orchestrator | Saturday 03 January 2026 00:41:43 +0000 (0:00:00.585) 0:00:01.666 ******
2026-01-03 00:41:50.036890 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:41:50.036896 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:41:50.036903 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:41:50.036910 | orchestrator |
2026-01-03 00:41:50.036917 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-03 00:41:50.036924 | orchestrator | Saturday 03 January 2026 00:41:43 +0000 (0:00:00.293) 0:00:01.960 ******
2026-01-03 00:41:50.036931 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-03 00:41:50.036941 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-03 00:41:50.036948 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-03 00:41:50.036955 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-03 00:41:50.036962 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-03 00:41:50.036969 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-03 00:41:50.036976 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-03 00:41:50.036982 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-03 00:41:50.036989 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-03 00:41:50.036996 | orchestrator |
2026-01-03 00:41:50.037003 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-03 00:41:50.037011 | orchestrator | Saturday 03 January 2026 00:41:44 +0000 (0:00:01.169) 0:00:03.130 ******
2026-01-03 00:41:50.037018 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-03 00:41:50.037025 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-03 00:41:50.037032 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-03 00:41:50.037038 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-03 00:41:50.037045 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-03 00:41:50.037052 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-01-03 00:41:50.037058 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-03 00:41:50.037065 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-03 00:41:50.037072 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-03 00:41:50.037078 | orchestrator |
2026-01-03 00:41:50.037085 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-03 00:41:50.037092 | orchestrator | Saturday 03 January 2026 00:41:46 +0000 (0:00:01.489) 0:00:04.620 ******
2026-01-03 00:41:50.037099 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-03 00:41:50.037106 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-03 00:41:50.037113 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-03 00:41:50.037119 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-03 00:41:50.037126 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-03 00:41:50.037133 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-03 00:41:50.037139 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-03 00:41:50.037155 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-03 00:41:50.037166 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-03 00:41:50.037174 | orchestrator |
2026-01-03 00:41:50.037180 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-03 00:41:50.037187 | orchestrator | Saturday 03 January 2026 00:41:48 +0000 (0:00:02.111) 0:00:06.732 ******
2026-01-03 00:41:50.037194 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:41:50.037201 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:41:50.037208 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:41:50.037215 | orchestrator |
2026-01-03 00:41:50.037221 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-01-03 00:41:50.037228 | orchestrator | Saturday 03 January 2026 00:41:49 +0000 (0:00:00.621) 0:00:07.353 ******
2026-01-03 00:41:50.037235 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:41:50.037242 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:41:50.037248 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:41:50.037255 | orchestrator |
2026-01-03 00:41:50.037262 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:41:50.037271 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-03 00:41:50.037279 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-03 00:41:50.037300 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-03 00:41:50.037308 | orchestrator |
2026-01-03 00:41:50.037315 | orchestrator |
2026-01-03 00:41:50.037321 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:41:50.037328 | orchestrator | Saturday 03 January 2026 00:41:49 +0000 (0:00:00.601) 0:00:07.954 ******
2026-01-03 00:41:50.037335 | orchestrator | ===============================================================================
2026-01-03 00:41:50.037342 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.11s
2026-01-03 00:41:50.037349 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.49s
2026-01-03 00:41:50.037356 | orchestrator | Check device availability ----------------------------------------------- 1.17s
2026-01-03 00:41:50.037362 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s
2026-01-03 00:41:50.037369 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s
2026-01-03 00:41:50.037376 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s
2026-01-03 00:41:50.037383 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.59s
2026-01-03 00:41:50.037389 | orchestrator | Remove all rook related logical devices --------------------------------- 0.36s
2026-01-03 00:41:50.037396 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s
2026-01-03 00:42:02.359280 | orchestrator | 2026-01-03 00:42:02 | INFO  | Task 4c62558a-653d-4ff4-b309-685ff4ae81f9 (facts) was prepared for execution.
2026-01-03 00:42:02.359412 | orchestrator | 2026-01-03 00:42:02 | INFO  | It takes a moment until task 4c62558a-653d-4ff4-b309-685ff4ae81f9 (facts) has been started and output is visible here.
2026-01-03 00:42:15.505233 | orchestrator |
2026-01-03 00:42:15.505347 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-03 00:42:15.505365 | orchestrator |
2026-01-03 00:42:15.505378 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-03 00:42:15.505390 | orchestrator | Saturday 03 January 2026 00:42:06 +0000 (0:00:00.256) 0:00:00.256 ******
2026-01-03 00:42:15.505402 | orchestrator | ok: [testbed-manager]
2026-01-03 00:42:15.505414 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:42:15.505425 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:42:15.505462 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:42:15.505474 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:42:15.505485 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:42:15.505496 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:42:15.505507 | orchestrator |
2026-01-03 00:42:15.505518 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-03 00:42:15.505529 |
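Per the task names in the wipe-partitions play above, each OSD disk (/dev/sdb..sdd) has its signatures cleared with `wipefs` and its first 32M overwritten with zeros before udev is reloaded (presumably via `udevadm control --reload` and `udevadm trigger`). A safe stand-in sketch of the zeroing step, using a temp file in place of a real device so it can be run anywhere:

```shell
# Stand-in "device": a 48 MiB temp file filled with random data.
dev=$(mktemp)
dd if=/dev/urandom of="$dev" bs=1M count=48 status=none

# The play's zeroing step, roughly: dd if=/dev/zero of=/dev/sdX bs=1M count=32
dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc status=none

# Verify: the first 32 MiB now compare equal to /dev/zero.
if cmp -s -n $((32 * 1024 * 1024)) "$dev" /dev/zero; then wiped=yes; else wiped=no; fi
rm -f "$dev"
```

On a real node, `wipefs -a /dev/sdX` would run first to drop filesystem/RAID signatures; zeroing the leading 32M additionally clears partition tables and any leftover LVM/Ceph metadata that wipefs does not know about.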
orchestrator | Saturday 03 January 2026 00:42:07 +0000 (0:00:01.079) 0:00:01.336 ******
2026-01-03 00:42:15.505540 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:42:15.505552 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:42:15.505563 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:42:15.505574 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:42:15.505585 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:15.505596 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:15.505607 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:15.505618 | orchestrator |
2026-01-03 00:42:15.505629 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-03 00:42:15.505641 | orchestrator |
2026-01-03 00:42:15.505676 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-03 00:42:15.505689 | orchestrator | Saturday 03 January 2026 00:42:08 +0000 (0:00:01.167) 0:00:02.504 ******
2026-01-03 00:42:15.505700 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:42:15.505745 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:42:15.505765 | orchestrator | ok: [testbed-manager]
2026-01-03 00:42:15.505784 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:42:15.505802 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:42:15.505821 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:42:15.505839 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:42:15.505859 | orchestrator |
2026-01-03 00:42:15.505880 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-03 00:42:15.505899 | orchestrator |
2026-01-03 00:42:15.505916 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-03 00:42:15.505936 | orchestrator | Saturday 03 January 2026 00:42:14 +0000 (0:00:05.811) 0:00:08.315 ******
2026-01-03 00:42:15.505956 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:42:15.505975 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:42:15.505996 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:42:15.506081 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:42:15.506103 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:15.506116 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:15.506129 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:15.506146 | orchestrator |
2026-01-03 00:42:15.506166 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:42:15.506183 | orchestrator | testbed-manager : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-03 00:42:15.506198 | orchestrator | testbed-node-0 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-03 00:42:15.506217 | orchestrator | testbed-node-1 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-03 00:42:15.506236 | orchestrator | testbed-node-2 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-03 00:42:15.506255 | orchestrator | testbed-node-3 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-03 00:42:15.506277 | orchestrator | testbed-node-4 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-03 00:42:15.506304 | orchestrator | testbed-node-5 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-01-03 00:42:15.506321 | orchestrator |
2026-01-03 00:42:15.506357 | orchestrator |
2026-01-03 00:42:15.506370 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:42:15.506381 | orchestrator | Saturday 03 January 2026 00:42:15 +0000 (0:00:00.484) 0:00:08.799 ******
2026-01-03 00:42:15.506392 | orchestrator | ===============================================================================
2026-01-03 00:42:15.506403 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.81s
2026-01-03 00:42:15.506414 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.17s
2026-01-03 00:42:15.506426 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s
2026-01-03 00:42:15.506437 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s
2026-01-03 00:42:17.816209 | orchestrator | 2026-01-03 00:42:17 | INFO  | Task 26b9f41b-f6e9-4de6-a2cc-2709ffccaf1d (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-03 00:42:17.816290 | orchestrator | 2026-01-03 00:42:17 | INFO  | It takes a moment until task 26b9f41b-f6e9-4de6-a2cc-2709ffccaf1d (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-03 00:42:28.676812 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-03 00:42:28.676884 | orchestrator | 2.16.14
2026-01-03 00:42:28.676892 | orchestrator |
2026-01-03 00:42:28.676897 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-03 00:42:28.676902 | orchestrator |
2026-01-03 00:42:28.676907 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-03 00:42:28.676911 | orchestrator | Saturday 03 January 2026 00:42:21 +0000 (0:00:00.283) 0:00:00.283 ******
2026-01-03 00:42:28.676916 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-03 00:42:28.676920 | orchestrator |
2026-01-03 00:42:28.676924 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-03 00:42:28.676928 | orchestrator | Saturday 03 January 2026 00:42:22 +0000 (0:00:00.226) 0:00:00.510 ******
2026-01-03 00:42:28.676932 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:42:28.676937 | orchestrator |
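The repeated "Add known links" tasks that follow resolve the stable /dev/disk/by-id symlinks (e.g. scsi-0QEMU_QEMU_HARDDISK_…) back to kernel device names for each disk. The core of that mapping is plain symlink resolution; a self-contained sketch using a temp directory in place of /dev:

```shell
# Build a fake /dev layout: a device-node stand-in plus a by-id style symlink.
fakedev=$(mktemp -d)
touch "$fakedev/sdb"
mkdir "$fakedev/by-id"
ln -s ../sdb "$fakedev/by-id/scsi-0QEMU_QEMU_HARDDISK_example"

# Resolve each by-id link to its kernel name, as the play records per device.
for link in "$fakedev"/by-id/*; do
  resolved=$(readlink -f "$link")
  printf '%s -> %s\n' "${link##*/}" "${resolved##*/}"
done
rm -rf "$fakedev"
```

On a real node the loop would walk /dev/disk/by-id, which is why the QEMU serial-based names appear twice per disk (one scsi-0…, one scsi-S… alias pointing at the same device).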
2026-01-03 00:42:28.676940 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.676944 | orchestrator | Saturday 03 January 2026 00:42:22 +0000 (0:00:00.202) 0:00:00.713 ******
2026-01-03 00:42:28.676949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-03 00:42:28.676958 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-03 00:42:28.676962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-03 00:42:28.676966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-03 00:42:28.676970 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-03 00:42:28.676974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-03 00:42:28.676978 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-03 00:42:28.676982 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-03 00:42:28.676986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-03 00:42:28.676989 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-03 00:42:28.676993 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-03 00:42:28.676997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-03 00:42:28.677001 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-03 00:42:28.677005 | orchestrator |
2026-01-03 00:42:28.677008 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.677025 | orchestrator | Saturday 03 January 2026 00:42:22 +0000 (0:00:00.395) 0:00:01.108 ******
2026-01-03 00:42:28.677029 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677033 | orchestrator |
2026-01-03 00:42:28.677037 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.677041 | orchestrator | Saturday 03 January 2026 00:42:22 +0000 (0:00:00.183) 0:00:01.292 ******
2026-01-03 00:42:28.677044 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677048 | orchestrator |
2026-01-03 00:42:28.677052 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.677056 | orchestrator | Saturday 03 January 2026 00:42:23 +0000 (0:00:00.165) 0:00:01.457 ******
2026-01-03 00:42:28.677060 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677063 | orchestrator |
2026-01-03 00:42:28.677067 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.677074 | orchestrator | Saturday 03 January 2026 00:42:23 +0000 (0:00:00.194) 0:00:01.652 ******
2026-01-03 00:42:28.677078 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677082 | orchestrator |
2026-01-03 00:42:28.677085 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.677089 | orchestrator | Saturday 03 January 2026 00:42:23 +0000 (0:00:00.171) 0:00:01.823 ******
2026-01-03 00:42:28.677093 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677097 | orchestrator |
2026-01-03 00:42:28.677101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.677105 | orchestrator | Saturday 03 January 2026 00:42:23 +0000 (0:00:00.172) 0:00:01.996 ******
2026-01-03 00:42:28.677109 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677112 | orchestrator |
2026-01-03 00:42:28.677116 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.677120 | orchestrator | Saturday 03 January 2026 00:42:23 +0000 (0:00:00.203) 0:00:02.199 ******
2026-01-03 00:42:28.677124 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677128 | orchestrator |
2026-01-03 00:42:28.677131 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.677135 | orchestrator | Saturday 03 January 2026 00:42:23 +0000 (0:00:00.201) 0:00:02.400 ******
2026-01-03 00:42:28.677139 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677144 | orchestrator |
2026-01-03 00:42:28.677150 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.677155 | orchestrator | Saturday 03 January 2026 00:42:24 +0000 (0:00:00.196) 0:00:02.597 ******
2026-01-03 00:42:28.677162 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36)
2026-01-03 00:42:28.677169 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36)
2026-01-03 00:42:28.677175 | orchestrator |
2026-01-03 00:42:28.677181 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.677199 | orchestrator | Saturday 03 January 2026 00:42:24 +0000 (0:00:00.386) 0:00:02.984 ******
2026-01-03 00:42:28.677205 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_87925512-ad51-4d26-92cc-5f354ec37d18)
2026-01-03 00:42:28.677215 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_87925512-ad51-4d26-92cc-5f354ec37d18)
2026-01-03 00:42:28.677221 | orchestrator |
2026-01-03 00:42:28.677227 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.677233 | orchestrator | Saturday 03 January 2026 00:42:25 +0000 (0:00:00.584) 0:00:03.569 ******
2026-01-03 00:42:28.677238 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5af48d4e-a5d9-4c77-9873-39f930691ccf)
2026-01-03 00:42:28.677244 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5af48d4e-a5d9-4c77-9873-39f930691ccf)
2026-01-03 00:42:28.677250 | orchestrator |
2026-01-03 00:42:28.677255 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.677267 | orchestrator | Saturday 03 January 2026 00:42:25 +0000 (0:00:00.606) 0:00:04.176 ******
2026-01-03 00:42:28.677274 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0b3b8b78-42ce-4774-a4e8-f10424aa2bf1)
2026-01-03 00:42:28.677280 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0b3b8b78-42ce-4774-a4e8-f10424aa2bf1)
2026-01-03 00:42:28.677286 | orchestrator |
2026-01-03 00:42:28.677292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:28.677297 | orchestrator | Saturday 03 January 2026 00:42:26 +0000 (0:00:00.814) 0:00:04.990 ******
2026-01-03 00:42:28.677303 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-03 00:42:28.677310 | orchestrator |
2026-01-03 00:42:28.677314 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:28.677318 | orchestrator | Saturday 03 January 2026 00:42:26 +0000 (0:00:00.320) 0:00:05.311 ******
2026-01-03 00:42:28.677322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-03 00:42:28.677325 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-03 00:42:28.677329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-03 00:42:28.677333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-03 00:42:28.677337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-03 00:42:28.677341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-03 00:42:28.677344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-03 00:42:28.677348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-03 00:42:28.677352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-03 00:42:28.677356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-03 00:42:28.677360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-03 00:42:28.677363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-03 00:42:28.677367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-03 00:42:28.677371 | orchestrator |
2026-01-03 00:42:28.677376 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:28.677380 | orchestrator | Saturday 03 January 2026 00:42:27 +0000 (0:00:00.370) 0:00:05.681 ******
2026-01-03 00:42:28.677384 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677389 | orchestrator |
2026-01-03 00:42:28.677393 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:28.677398 | orchestrator | Saturday 03 January 2026 00:42:27 +0000 (0:00:00.202) 0:00:05.884 ******
2026-01-03 00:42:28.677402 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677406 | orchestrator |
2026-01-03 00:42:28.677411 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:28.677415 | orchestrator | Saturday 03 January 2026 00:42:27 +0000 (0:00:00.198) 0:00:06.083 ******
2026-01-03 00:42:28.677420 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677424 | orchestrator |
2026-01-03 00:42:28.677429 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:28.677433 | orchestrator | Saturday 03 January 2026 00:42:27 +0000 (0:00:00.191) 0:00:06.274 ******
2026-01-03 00:42:28.677437 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677442 | orchestrator |
2026-01-03 00:42:28.677447 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:28.677451 | orchestrator | Saturday 03 January 2026 00:42:28 +0000 (0:00:00.216) 0:00:06.491 ******
2026-01-03 00:42:28.677460 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677465 | orchestrator |
2026-01-03 00:42:28.677469 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:28.677473 | orchestrator | Saturday 03 January 2026 00:42:28 +0000 (0:00:00.205) 0:00:06.697 ******
2026-01-03 00:42:28.677478 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677482 | orchestrator |
2026-01-03 00:42:28.677487 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:28.677492 | orchestrator | Saturday 03 January 2026 00:42:28 +0000 (0:00:00.207) 0:00:06.905 ******
2026-01-03 00:42:28.677496 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:28.677501 | orchestrator |
2026-01-03 00:42:28.677508 | orchestrator | TASK [Add known
partitions to the list of available block devices] ************* 2026-01-03 00:42:36.249800 | orchestrator | Saturday 03 January 2026 00:42:28 +0000 (0:00:00.198) 0:00:07.103 ****** 2026-01-03 00:42:36.249917 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:42:36.249934 | orchestrator | 2026-01-03 00:42:36.249947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:36.249960 | orchestrator | Saturday 03 January 2026 00:42:28 +0000 (0:00:00.222) 0:00:07.325 ****** 2026-01-03 00:42:36.249971 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-03 00:42:36.250002 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-03 00:42:36.250056 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-03 00:42:36.250070 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-03 00:42:36.250081 | orchestrator | 2026-01-03 00:42:36.250093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:36.250105 | orchestrator | Saturday 03 January 2026 00:42:29 +0000 (0:00:00.985) 0:00:08.311 ****** 2026-01-03 00:42:36.250116 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:42:36.250128 | orchestrator | 2026-01-03 00:42:36.250139 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:36.250151 | orchestrator | Saturday 03 January 2026 00:42:30 +0000 (0:00:00.194) 0:00:08.505 ****** 2026-01-03 00:42:36.250162 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:42:36.250173 | orchestrator | 2026-01-03 00:42:36.250184 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:36.250196 | orchestrator | Saturday 03 January 2026 00:42:30 +0000 (0:00:00.199) 0:00:08.705 ****** 2026-01-03 00:42:36.250207 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:42:36.250218 | orchestrator | 2026-01-03 
00:42:36.250229 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:36.250240 | orchestrator | Saturday 03 January 2026 00:42:30 +0000 (0:00:00.206) 0:00:08.912 ******
2026-01-03 00:42:36.250251 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:36.250263 | orchestrator |
2026-01-03 00:42:36.250274 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-03 00:42:36.250285 | orchestrator | Saturday 03 January 2026 00:42:30 +0000 (0:00:00.203) 0:00:09.115 ******
2026-01-03 00:42:36.250296 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-01-03 00:42:36.250307 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-01-03 00:42:36.250318 | orchestrator |
2026-01-03 00:42:36.250329 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-03 00:42:36.250340 | orchestrator | Saturday 03 January 2026 00:42:30 +0000 (0:00:00.167) 0:00:09.283 ******
2026-01-03 00:42:36.250351 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:36.250362 | orchestrator |
2026-01-03 00:42:36.250373 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-03 00:42:36.250384 | orchestrator | Saturday 03 January 2026 00:42:30 +0000 (0:00:00.130) 0:00:09.413 ******
2026-01-03 00:42:36.250395 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:36.250406 | orchestrator |
2026-01-03 00:42:36.250418 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-03 00:42:36.250453 | orchestrator | Saturday 03 January 2026 00:42:31 +0000 (0:00:00.137) 0:00:09.551 ******
2026-01-03 00:42:36.250464 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:36.250475 | orchestrator |
2026-01-03 00:42:36.250486 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-03 00:42:36.250497 | orchestrator | Saturday 03 January 2026 00:42:31 +0000 (0:00:00.137) 0:00:09.688 ******
2026-01-03 00:42:36.250508 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:42:36.250519 | orchestrator |
2026-01-03 00:42:36.250530 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-03 00:42:36.250541 | orchestrator | Saturday 03 January 2026 00:42:31 +0000 (0:00:00.135) 0:00:09.824 ******
2026-01-03 00:42:36.250553 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '147f94e4-6564-5421-8ac2-dc0697a6d722'}})
2026-01-03 00:42:36.250564 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43909478-d18c-58e7-896e-8d0e3e550915'}})
2026-01-03 00:42:36.250575 | orchestrator |
2026-01-03 00:42:36.250586 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-03 00:42:36.250598 | orchestrator | Saturday 03 January 2026 00:42:31 +0000 (0:00:00.153) 0:00:09.977 ******
2026-01-03 00:42:36.250610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '147f94e4-6564-5421-8ac2-dc0697a6d722'}})
2026-01-03 00:42:36.250630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43909478-d18c-58e7-896e-8d0e3e550915'}})
2026-01-03 00:42:36.250641 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:36.250652 | orchestrator |
2026-01-03 00:42:36.250663 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-03 00:42:36.250674 | orchestrator | Saturday 03 January 2026 00:42:31 +0000 (0:00:00.139) 0:00:10.117 ******
2026-01-03 00:42:36.250685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '147f94e4-6564-5421-8ac2-dc0697a6d722'}})
2026-01-03 00:42:36.250719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43909478-d18c-58e7-896e-8d0e3e550915'}})
2026-01-03 00:42:36.250730 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:36.250741 | orchestrator |
2026-01-03 00:42:36.250752 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-03 00:42:36.250763 | orchestrator | Saturday 03 January 2026 00:42:32 +0000 (0:00:00.333) 0:00:10.451 ******
2026-01-03 00:42:36.250774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '147f94e4-6564-5421-8ac2-dc0697a6d722'}})
2026-01-03 00:42:36.250805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43909478-d18c-58e7-896e-8d0e3e550915'}})
2026-01-03 00:42:36.250817 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:36.250829 | orchestrator |
2026-01-03 00:42:36.250839 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-03 00:42:36.250851 | orchestrator | Saturday 03 January 2026 00:42:32 +0000 (0:00:00.162) 0:00:10.613 ******
2026-01-03 00:42:36.250861 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:42:36.250873 | orchestrator |
2026-01-03 00:42:36.250884 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-03 00:42:36.250894 | orchestrator | Saturday 03 January 2026 00:42:32 +0000 (0:00:00.178) 0:00:10.792 ******
2026-01-03 00:42:36.250905 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:42:36.250917 | orchestrator |
2026-01-03 00:42:36.250927 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-03 00:42:36.250938 | orchestrator | Saturday 03 January 2026 00:42:32 +0000 (0:00:00.151) 0:00:10.943 ******
2026-01-03 00:42:36.250949 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:36.250960 | orchestrator |
2026-01-03 00:42:36.250971 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-03 00:42:36.250982 | orchestrator | Saturday 03 January 2026 00:42:32 +0000 (0:00:00.138) 0:00:11.082 ******
2026-01-03 00:42:36.251002 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:36.251013 | orchestrator |
2026-01-03 00:42:36.251024 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-03 00:42:36.251035 | orchestrator | Saturday 03 January 2026 00:42:32 +0000 (0:00:00.146) 0:00:11.228 ******
2026-01-03 00:42:36.251046 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:36.251057 | orchestrator |
2026-01-03 00:42:36.251068 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-03 00:42:36.251079 | orchestrator | Saturday 03 January 2026 00:42:32 +0000 (0:00:00.146) 0:00:11.387 ******
2026-01-03 00:42:36.251090 | orchestrator | ok: [testbed-node-3] => {
2026-01-03 00:42:36.251101 | orchestrator |     "ceph_osd_devices": {
2026-01-03 00:42:36.251112 | orchestrator |         "sdb": {
2026-01-03 00:42:36.251123 | orchestrator |             "osd_lvm_uuid": "147f94e4-6564-5421-8ac2-dc0697a6d722"
2026-01-03 00:42:36.251134 | orchestrator |         },
2026-01-03 00:42:36.251145 | orchestrator |         "sdc": {
2026-01-03 00:42:36.251156 | orchestrator |             "osd_lvm_uuid": "43909478-d18c-58e7-896e-8d0e3e550915"
2026-01-03 00:42:36.251167 | orchestrator |         }
2026-01-03 00:42:36.251178 | orchestrator |     }
2026-01-03 00:42:36.251189 | orchestrator | }
2026-01-03 00:42:36.251200 | orchestrator |
2026-01-03 00:42:36.251211 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-03 00:42:36.251228 | orchestrator | Saturday 03 January 2026 00:42:33 +0000 (0:00:00.146) 0:00:11.534 ******
2026-01-03 00:42:36.251240 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:36.251250 | orchestrator |
2026-01-03 00:42:36.251262 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-03 00:42:36.251272 | orchestrator | Saturday 03 January 2026 00:42:33 +0000 (0:00:00.130) 0:00:11.665 ******
2026-01-03 00:42:36.251283 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:36.251294 | orchestrator |
2026-01-03 00:42:36.251305 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-03 00:42:36.251316 | orchestrator | Saturday 03 January 2026 00:42:33 +0000 (0:00:00.152) 0:00:11.817 ******
2026-01-03 00:42:36.251328 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:42:36.251339 | orchestrator |
2026-01-03 00:42:36.251350 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-03 00:42:36.251361 | orchestrator | Saturday 03 January 2026 00:42:33 +0000 (0:00:00.128) 0:00:11.945 ******
2026-01-03 00:42:36.251372 | orchestrator | changed: [testbed-node-3] => {
2026-01-03 00:42:36.251383 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-03 00:42:36.251394 | orchestrator |         "ceph_osd_devices": {
2026-01-03 00:42:36.251405 | orchestrator |             "sdb": {
2026-01-03 00:42:36.251416 | orchestrator |                 "osd_lvm_uuid": "147f94e4-6564-5421-8ac2-dc0697a6d722"
2026-01-03 00:42:36.251427 | orchestrator |             },
2026-01-03 00:42:36.251438 | orchestrator |             "sdc": {
2026-01-03 00:42:36.251449 | orchestrator |                 "osd_lvm_uuid": "43909478-d18c-58e7-896e-8d0e3e550915"
2026-01-03 00:42:36.251460 | orchestrator |             }
2026-01-03 00:42:36.251471 | orchestrator |         },
2026-01-03 00:42:36.251482 | orchestrator |         "lvm_volumes": [
2026-01-03 00:42:36.251493 | orchestrator |             {
2026-01-03 00:42:36.251504 | orchestrator |                 "data": "osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722",
2026-01-03 00:42:36.251515 | orchestrator |                 "data_vg": "ceph-147f94e4-6564-5421-8ac2-dc0697a6d722"
2026-01-03 00:42:36.251526 | orchestrator |             },
2026-01-03 00:42:36.251537 | orchestrator |             {
2026-01-03 00:42:36.251548 | orchestrator |                 "data": "osd-block-43909478-d18c-58e7-896e-8d0e3e550915",
2026-01-03 00:42:36.251559 | orchestrator |                 "data_vg": "ceph-43909478-d18c-58e7-896e-8d0e3e550915"
2026-01-03 00:42:36.251570 | orchestrator |             }
2026-01-03 00:42:36.251581 | orchestrator |         ]
2026-01-03 00:42:36.251592 | orchestrator |     }
2026-01-03 00:42:36.251610 | orchestrator | }
2026-01-03 00:42:36.251621 | orchestrator |
2026-01-03 00:42:36.251632 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-03 00:42:36.251643 | orchestrator | Saturday 03 January 2026 00:42:33 +0000 (0:00:00.393) 0:00:12.339 ******
2026-01-03 00:42:36.251654 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-03 00:42:36.251665 | orchestrator |
2026-01-03 00:42:36.251676 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-03 00:42:36.251687 | orchestrator |
2026-01-03 00:42:36.251739 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-03 00:42:36.251750 | orchestrator | Saturday 03 January 2026 00:42:35 +0000 (0:00:01.845) 0:00:14.184 ******
2026-01-03 00:42:36.251761 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-03 00:42:36.251772 | orchestrator |
2026-01-03 00:42:36.251783 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-03 00:42:36.251794 | orchestrator | Saturday 03 January 2026 00:42:36 +0000 (0:00:00.225) 0:00:14.444 ******
2026-01-03 00:42:36.251805 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:42:36.251816 | orchestrator |
2026-01-03 00:42:36.251834 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.143738 | orchestrator | Saturday 03 January 2026 00:42:36 +0000 (0:00:00.225) 0:00:14.669 ******
2026-01-03 00:42:44.143830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-01-03 00:42:44.143840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-01-03 00:42:44.143847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-01-03 00:42:44.143854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-01-03 00:42:44.143861 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-01-03 00:42:44.143867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-01-03 00:42:44.143874 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-01-03 00:42:44.143896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-01-03 00:42:44.143904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-01-03 00:42:44.143910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-01-03 00:42:44.143916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-01-03 00:42:44.143926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-01-03 00:42:44.143933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-03 00:42:44.143939 | orchestrator |
2026-01-03 00:42:44.143947 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.143953 | orchestrator | Saturday 03 January 2026 00:42:36 +0000 (0:00:00.379) 0:00:15.049 ******
2026-01-03 00:42:44.143960 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.143967 | orchestrator |
2026-01-03 00:42:44.143974 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.143980 | orchestrator | Saturday 03 January 2026 00:42:36 +0000 (0:00:00.237) 0:00:15.287 ******
2026-01-03 00:42:44.143987 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.143993 | orchestrator |
2026-01-03 00:42:44.143999 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.144008 | orchestrator | Saturday 03 January 2026 00:42:37 +0000 (0:00:00.214) 0:00:15.501 ******
2026-01-03 00:42:44.144017 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.144027 | orchestrator |
2026-01-03 00:42:44.144037 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.144069 | orchestrator | Saturday 03 January 2026 00:42:37 +0000 (0:00:00.192) 0:00:15.693 ******
2026-01-03 00:42:44.144079 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.144089 | orchestrator |
2026-01-03 00:42:44.144099 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.144109 | orchestrator | Saturday 03 January 2026 00:42:37 +0000 (0:00:00.197) 0:00:15.891 ******
2026-01-03 00:42:44.144118 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.144128 | orchestrator |
2026-01-03 00:42:44.144138 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.144147 | orchestrator | Saturday 03 January 2026 00:42:38 +0000 (0:00:00.566) 0:00:16.458 ******
2026-01-03 00:42:44.144157 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.144167 | orchestrator |
2026-01-03 00:42:44.144177 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.144186 | orchestrator | Saturday 03 January 2026 00:42:38 +0000 (0:00:00.194) 0:00:16.653 ******
2026-01-03 00:42:44.144196 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.144206 | orchestrator |
2026-01-03 00:42:44.144215 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.144225 | orchestrator | Saturday 03 January 2026 00:42:38 +0000 (0:00:00.185) 0:00:16.839 ******
2026-01-03 00:42:44.144235 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.144245 | orchestrator |
2026-01-03 00:42:44.144255 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.144267 | orchestrator | Saturday 03 January 2026 00:42:38 +0000 (0:00:00.188) 0:00:17.027 ******
2026-01-03 00:42:44.144278 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f)
2026-01-03 00:42:44.144291 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f)
2026-01-03 00:42:44.144302 | orchestrator |
2026-01-03 00:42:44.144313 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.144324 | orchestrator | Saturday 03 January 2026 00:42:39 +0000 (0:00:00.413) 0:00:17.441 ******
2026-01-03 00:42:44.144336 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_38f41ea3-c1f8-47b6-a316-62c713a7ab6d)
2026-01-03 00:42:44.144347 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_38f41ea3-c1f8-47b6-a316-62c713a7ab6d)
2026-01-03 00:42:44.144359 | orchestrator |
2026-01-03 00:42:44.144371 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.144382 | orchestrator | Saturday 03 January 2026 00:42:39 +0000 (0:00:00.432) 0:00:17.873 ******
2026-01-03 00:42:44.144393 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_65a14f46-a6f7-4da8-aafc-46a47f969b79)
2026-01-03 00:42:44.144404 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_65a14f46-a6f7-4da8-aafc-46a47f969b79)
2026-01-03 00:42:44.144416 | orchestrator |
2026-01-03 00:42:44.144427 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.144453 | orchestrator | Saturday 03 January 2026 00:42:39 +0000 (0:00:00.422) 0:00:18.295 ******
2026-01-03 00:42:44.144464 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e5f15048-4e23-4094-a7eb-216bc02a3879)
2026-01-03 00:42:44.144474 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e5f15048-4e23-4094-a7eb-216bc02a3879)
2026-01-03 00:42:44.144484 | orchestrator |
2026-01-03 00:42:44.144500 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:44.144510 | orchestrator | Saturday 03 January 2026 00:42:40 +0000 (0:00:00.449) 0:00:18.744 ******
2026-01-03 00:42:44.144520 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-03 00:42:44.144530 | orchestrator |
2026-01-03 00:42:44.144540 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:44.144549 | orchestrator | Saturday 03 January 2026 00:42:40 +0000 (0:00:00.432) 0:00:19.177 ******
2026-01-03 00:42:44.144567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-03 00:42:44.144577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-03 00:42:44.144587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-03 00:42:44.144596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-03 00:42:44.144606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-03 00:42:44.144616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-03 00:42:44.144625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-03 00:42:44.144635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-03 00:42:44.144645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-03 00:42:44.144654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-03 00:42:44.144664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-03 00:42:44.144674 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-03 00:42:44.144718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-03 00:42:44.144730 | orchestrator |
2026-01-03 00:42:44.144740 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:44.144750 | orchestrator | Saturday 03 January 2026 00:42:41 +0000 (0:00:00.390) 0:00:19.567 ******
2026-01-03 00:42:44.144759 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.144769 | orchestrator |
2026-01-03 00:42:44.144778 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:44.144788 | orchestrator | Saturday 03 January 2026 00:42:41 +0000 (0:00:00.680) 0:00:20.248 ******
2026-01-03 00:42:44.144797 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.144807 | orchestrator |
2026-01-03 00:42:44.144817 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:44.144826 | orchestrator | Saturday 03 January 2026 00:42:42 +0000 (0:00:00.268) 0:00:20.517 ******
2026-01-03 00:42:44.144836 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.144846 | orchestrator |
2026-01-03 00:42:44.144856 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:44.144872 | orchestrator | Saturday 03 January 2026 00:42:42 +0000 (0:00:00.322) 0:00:20.840 ******
2026-01-03 00:42:44.144886 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.144901 | orchestrator |
2026-01-03 00:42:44.144916 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:44.144931 | orchestrator | Saturday 03 January 2026 00:42:42 +0000 (0:00:00.178) 0:00:21.018 ******
2026-01-03 00:42:44.144945 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.144961 | orchestrator |
2026-01-03 00:42:44.144976 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:44.144991 | orchestrator | Saturday 03 January 2026 00:42:42 +0000 (0:00:00.165) 0:00:21.184 ******
2026-01-03 00:42:44.145005 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.145020 | orchestrator |
2026-01-03 00:42:44.145036 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:44.145052 | orchestrator | Saturday 03 January 2026 00:42:42 +0000 (0:00:00.150) 0:00:21.335 ******
2026-01-03 00:42:44.145067 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.145082 | orchestrator |
2026-01-03 00:42:44.145097 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:44.145112 | orchestrator | Saturday 03 January 2026 00:42:43 +0000 (0:00:00.181) 0:00:21.517 ******
2026-01-03 00:42:44.145139 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:44.145156 | orchestrator |
2026-01-03 00:42:44.145173 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:44.145189 | orchestrator | Saturday 03 January 2026 00:42:43 +0000 (0:00:00.185) 0:00:21.702 ******
2026-01-03 00:42:44.145206 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-03 00:42:44.145223 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-03 00:42:44.145236 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-03 00:42:44.145246 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-03 00:42:44.145256 | orchestrator |
2026-01-03 00:42:44.145266 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:44.145275 | orchestrator | Saturday 03 January 2026 00:42:43 +0000 (0:00:00.722) 0:00:22.424 ******
2026-01-03 00:42:44.145285 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.213818 | orchestrator |
2026-01-03 00:42:49.213921 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:49.213935 | orchestrator | Saturday 03 January 2026 00:42:44 +0000 (0:00:00.152) 0:00:22.576 ******
2026-01-03 00:42:49.213945 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.213955 | orchestrator |
2026-01-03 00:42:49.213963 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:49.213990 | orchestrator | Saturday 03 January 2026 00:42:44 +0000 (0:00:00.151) 0:00:22.728 ******
2026-01-03 00:42:49.214000 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214008 | orchestrator |
2026-01-03 00:42:49.214057 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-03 00:42:49.214067 | orchestrator | Saturday 03 January 2026 00:42:44 +0000 (0:00:00.161) 0:00:22.890 ******
2026-01-03 00:42:49.214076 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214084 | orchestrator |
2026-01-03 00:42:49.214093 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-03 00:42:49.214102 | orchestrator | Saturday 03 January 2026 00:42:44 +0000 (0:00:00.477) 0:00:23.367 ******
2026-01-03 00:42:49.214111 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-01-03 00:42:49.214121 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-01-03 00:42:49.214130 | orchestrator |
2026-01-03 00:42:49.214138 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-03 00:42:49.214148 | orchestrator | Saturday 03 January 2026 00:42:45 +0000 (0:00:00.156) 0:00:23.524 ******
2026-01-03 00:42:49.214157 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214167 | orchestrator |
2026-01-03 00:42:49.214174 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-03 00:42:49.214180 | orchestrator | Saturday 03 January 2026 00:42:45 +0000 (0:00:00.124) 0:00:23.649 ******
2026-01-03 00:42:49.214186 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214191 | orchestrator |
2026-01-03 00:42:49.214197 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-03 00:42:49.214203 | orchestrator | Saturday 03 January 2026 00:42:45 +0000 (0:00:00.120) 0:00:23.770 ******
2026-01-03 00:42:49.214209 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214214 | orchestrator |
2026-01-03 00:42:49.214220 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-03 00:42:49.214226 | orchestrator | Saturday 03 January 2026 00:42:45 +0000 (0:00:00.116) 0:00:23.887 ******
2026-01-03 00:42:49.214231 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:42:49.214238 | orchestrator |
2026-01-03 00:42:49.214243 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-03 00:42:49.214249 | orchestrator | Saturday 03 January 2026 00:42:45 +0000 (0:00:00.094) 0:00:23.981 ******
2026-01-03 00:42:49.214255 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f97db499-9f50-5724-b4de-324784fab4ab'}})
2026-01-03 00:42:49.214261 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '293f14c0-405b-5b3a-a5c8-f3b182003048'}})
2026-01-03 00:42:49.214285 | orchestrator |
2026-01-03 00:42:49.214293 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-03 00:42:49.214301 | orchestrator | Saturday 03 January 2026 00:42:45 +0000 (0:00:00.118) 0:00:24.100 ******
2026-01-03 00:42:49.214311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f97db499-9f50-5724-b4de-324784fab4ab'}})
2026-01-03 00:42:49.214321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '293f14c0-405b-5b3a-a5c8-f3b182003048'}})
2026-01-03 00:42:49.214330 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214339 | orchestrator |
2026-01-03 00:42:49.214347 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-03 00:42:49.214352 | orchestrator | Saturday 03 January 2026 00:42:45 +0000 (0:00:00.100) 0:00:24.201 ******
2026-01-03 00:42:49.214358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f97db499-9f50-5724-b4de-324784fab4ab'}})
2026-01-03 00:42:49.214363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '293f14c0-405b-5b3a-a5c8-f3b182003048'}})
2026-01-03 00:42:49.214369 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214374 | orchestrator |
2026-01-03 00:42:49.214380 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-03 00:42:49.214385 | orchestrator | Saturday 03 January 2026 00:42:45 +0000 (0:00:00.119) 0:00:24.321 ******
2026-01-03 00:42:49.214391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f97db499-9f50-5724-b4de-324784fab4ab'}})
2026-01-03 00:42:49.214396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '293f14c0-405b-5b3a-a5c8-f3b182003048'}})
2026-01-03 00:42:49.214402 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214407 | orchestrator |
2026-01-03 00:42:49.214413 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-03 00:42:49.214418 | orchestrator | Saturday 03 January 2026 00:42:45 +0000 (0:00:00.108) 0:00:24.429 ******
2026-01-03 00:42:49.214423 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:42:49.214429 | orchestrator |
2026-01-03 00:42:49.214434 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-03 00:42:49.214439 | orchestrator | Saturday 03 January 2026 00:42:46 +0000 (0:00:00.097) 0:00:24.526 ******
2026-01-03 00:42:49.214445 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:42:49.214450 | orchestrator |
2026-01-03 00:42:49.214456 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-03 00:42:49.214461 | orchestrator | Saturday 03 January 2026 00:42:46 +0000 (0:00:00.100) 0:00:24.627 ******
2026-01-03 00:42:49.214482 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214488 | orchestrator |
2026-01-03 00:42:49.214493 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-03 00:42:49.214499 | orchestrator | Saturday 03 January 2026 00:42:46 +0000 (0:00:00.232) 0:00:24.859 ******
2026-01-03 00:42:49.214504 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214510 | orchestrator |
2026-01-03 00:42:49.214515 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-03 00:42:49.214520 | orchestrator | Saturday 03 January 2026 00:42:46 +0000 (0:00:00.103) 0:00:24.963 ******
2026-01-03 00:42:49.214526 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214531 | orchestrator |
2026-01-03 00:42:49.214537 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-03 00:42:49.214542 | orchestrator | Saturday 03 January 2026 00:42:46 +0000 (0:00:00.112) 0:00:25.075 ******
2026-01-03 00:42:49.214548 | orchestrator | ok: [testbed-node-4] => {
2026-01-03 00:42:49.214553 | orchestrator |     "ceph_osd_devices": {
2026-01-03 00:42:49.214559 | orchestrator |         "sdb": {
2026-01-03 00:42:49.214564 | orchestrator |             "osd_lvm_uuid": "f97db499-9f50-5724-b4de-324784fab4ab"
2026-01-03 00:42:49.214576 | orchestrator |         },
2026-01-03 00:42:49.214581 | orchestrator |         "sdc": {
2026-01-03 00:42:49.214592 | orchestrator |             "osd_lvm_uuid": "293f14c0-405b-5b3a-a5c8-f3b182003048"
2026-01-03 00:42:49.214598 | orchestrator |         }
2026-01-03 00:42:49.214603 | orchestrator |     }
2026-01-03 00:42:49.214609 | orchestrator | }
2026-01-03 00:42:49.214615 | orchestrator |
2026-01-03 00:42:49.214621 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-03 00:42:49.214626 | orchestrator | Saturday 03 January 2026 00:42:46 +0000 (0:00:00.140) 0:00:25.216 ******
2026-01-03 00:42:49.214632 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214637 | orchestrator |
2026-01-03 00:42:49.214642 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-03 00:42:49.214648 | orchestrator | Saturday 03 January 2026 00:42:46 +0000 (0:00:00.117) 0:00:25.333 ******
2026-01-03 00:42:49.214653 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214659 | orchestrator |
2026-01-03 00:42:49.214664 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-03 00:42:49.214670 | orchestrator | Saturday 03 January 2026 00:42:47 +0000 (0:00:00.122) 0:00:25.457 ******
2026-01-03 00:42:49.214675 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:42:49.214721 | orchestrator |
2026-01-03 00:42:49.214728 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-03 00:42:49.214733 | orchestrator | Saturday 03 January 2026 00:42:47 +0000 (0:00:00.121) 0:00:25.579 ******
2026-01-03 00:42:49.214738 | orchestrator | changed: [testbed-node-4] => {
2026-01-03 00:42:49.214744 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-03 00:42:49.214750 | orchestrator |         "ceph_osd_devices": {
2026-01-03 00:42:49.214755 | orchestrator |             "sdb": {
2026-01-03 00:42:49.214764 | orchestrator |                 "osd_lvm_uuid": "f97db499-9f50-5724-b4de-324784fab4ab"
2026-01-03 00:42:49.214770 | orchestrator |             },
2026-01-03 00:42:49.214775 | orchestrator |             "sdc": {
2026-01-03 00:42:49.214780 | orchestrator |                 "osd_lvm_uuid": "293f14c0-405b-5b3a-a5c8-f3b182003048"
2026-01-03 00:42:49.214786 | orchestrator |             }
2026-01-03 00:42:49.214791 | orchestrator |         },
2026-01-03 00:42:49.214797 | orchestrator |         "lvm_volumes": [
2026-01-03 00:42:49.214802 | orchestrator |             {
2026-01-03 00:42:49.214807 | orchestrator |                 "data": "osd-block-f97db499-9f50-5724-b4de-324784fab4ab",
2026-01-03 00:42:49.214813 | orchestrator |                 "data_vg": "ceph-f97db499-9f50-5724-b4de-324784fab4ab"
2026-01-03 00:42:49.214818 | orchestrator |             },
2026-01-03 00:42:49.214824 | orchestrator |             {
2026-01-03 00:42:49.214829 | orchestrator |                 "data": "osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048",
2026-01-03 00:42:49.214834 | orchestrator |                 "data_vg": "ceph-293f14c0-405b-5b3a-a5c8-f3b182003048"
2026-01-03 00:42:49.214840 | orchestrator |             }
2026-01-03 00:42:49.214845 | orchestrator |         ]
2026-01-03 00:42:49.214851 | orchestrator |     }
2026-01-03 00:42:49.214856 | orchestrator | }
2026-01-03 00:42:49.214862 | orchestrator |
2026-01-03 00:42:49.214867 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-03 00:42:49.214873 | orchestrator | Saturday 03 January 2026 00:42:47 +0000 (0:00:00.201) 0:00:25.780 ******
2026-01-03 00:42:49.214878 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-03 00:42:49.214884 | orchestrator |
2026-01-03 00:42:49.214889 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-03 00:42:49.214894 | orchestrator |
2026-01-03 00:42:49.214900 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-03 00:42:49.214905 | orchestrator | Saturday 03 January 2026 00:42:48 +0000 (0:00:00.857) 0:00:26.638 ******
2026-01-03 00:42:49.214911 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-03 00:42:49.214916 | orchestrator |
2026-01-03 00:42:49.214921 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-03 00:42:49.214931 | orchestrator | Saturday 03 January 2026 00:42:48 +0000 (0:00:00.491) 0:00:27.129 ******
2026-01-03 00:42:49.214937 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:42:49.214942 | orchestrator |
2026-01-03 00:42:49.214947 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:49.214953 | orchestrator | Saturday 03 January 2026 00:42:48 +0000 (0:00:00.207) 0:00:27.337 ******
2026-01-03 00:42:49.214958 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-03 00:42:49.214964 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-03 00:42:49.214969 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-03 00:42:49.214975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-03 00:42:49.214980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-03 00:42:49.214989 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-03 00:42:56.789704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-03 00:42:56.789779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-03 00:42:56.789785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-03 00:42:56.789790 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-03 00:42:56.789794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-03 00:42:56.789798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-03 00:42:56.789802 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-03 00:42:56.789807 | orchestrator |
2026-01-03 00:42:56.789811 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:56.789816 | orchestrator | Saturday 03 January 2026 00:42:49 +0000 (0:00:00.294) 0:00:27.632 ******
2026-01-03 00:42:56.789820 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:56.789826 | orchestrator |
2026-01-03 00:42:56.789830 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:42:56.789834 | orchestrator | Saturday 03 January 2026 00:42:49 +0000
(0:00:00.188) 0:00:27.820 ****** 2026-01-03 00:42:56.789838 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.789842 | orchestrator | 2026-01-03 00:42:56.789846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:56.789850 | orchestrator | Saturday 03 January 2026 00:42:49 +0000 (0:00:00.183) 0:00:28.004 ****** 2026-01-03 00:42:56.789854 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.789858 | orchestrator | 2026-01-03 00:42:56.789862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:56.789866 | orchestrator | Saturday 03 January 2026 00:42:49 +0000 (0:00:00.174) 0:00:28.178 ****** 2026-01-03 00:42:56.789870 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.789874 | orchestrator | 2026-01-03 00:42:56.789878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:56.789882 | orchestrator | Saturday 03 January 2026 00:42:49 +0000 (0:00:00.158) 0:00:28.337 ****** 2026-01-03 00:42:56.789894 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.789898 | orchestrator | 2026-01-03 00:42:56.789902 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:56.789906 | orchestrator | Saturday 03 January 2026 00:42:50 +0000 (0:00:00.185) 0:00:28.523 ****** 2026-01-03 00:42:56.789910 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.789914 | orchestrator | 2026-01-03 00:42:56.789932 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:56.789952 | orchestrator | Saturday 03 January 2026 00:42:50 +0000 (0:00:00.184) 0:00:28.707 ****** 2026-01-03 00:42:56.789957 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.789961 | orchestrator | 2026-01-03 00:42:56.789965 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2026-01-03 00:42:56.789969 | orchestrator | Saturday 03 January 2026 00:42:50 +0000 (0:00:00.232) 0:00:28.940 ****** 2026-01-03 00:42:56.789973 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.789977 | orchestrator | 2026-01-03 00:42:56.789982 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:56.789986 | orchestrator | Saturday 03 January 2026 00:42:50 +0000 (0:00:00.217) 0:00:29.157 ****** 2026-01-03 00:42:56.789990 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca) 2026-01-03 00:42:56.789995 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca) 2026-01-03 00:42:56.789999 | orchestrator | 2026-01-03 00:42:56.790003 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:56.790007 | orchestrator | Saturday 03 January 2026 00:42:51 +0000 (0:00:00.663) 0:00:29.821 ****** 2026-01-03 00:42:56.790011 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5710b3c7-cfac-4b3c-9149-eeb74f32a79f) 2026-01-03 00:42:56.790059 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5710b3c7-cfac-4b3c-9149-eeb74f32a79f) 2026-01-03 00:42:56.790066 | orchestrator | 2026-01-03 00:42:56.790073 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:56.790079 | orchestrator | Saturday 03 January 2026 00:42:51 +0000 (0:00:00.386) 0:00:30.207 ****** 2026-01-03 00:42:56.790083 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d4e0be62-f642-4458-ac3b-093009378a3c) 2026-01-03 00:42:56.790087 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d4e0be62-f642-4458-ac3b-093009378a3c) 2026-01-03 00:42:56.790090 | orchestrator | 2026-01-03 00:42:56.790094 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:56.790098 | orchestrator | Saturday 03 January 2026 00:42:52 +0000 (0:00:00.452) 0:00:30.659 ****** 2026-01-03 00:42:56.790102 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_55b36885-901f-4155-8165-44f8903e4943) 2026-01-03 00:42:56.790106 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_55b36885-901f-4155-8165-44f8903e4943) 2026-01-03 00:42:56.790110 | orchestrator | 2026-01-03 00:42:56.790114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:42:56.790118 | orchestrator | Saturday 03 January 2026 00:42:52 +0000 (0:00:00.453) 0:00:31.113 ****** 2026-01-03 00:42:56.790121 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-03 00:42:56.790125 | orchestrator | 2026-01-03 00:42:56.790129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790144 | orchestrator | Saturday 03 January 2026 00:42:53 +0000 (0:00:00.336) 0:00:31.449 ****** 2026-01-03 00:42:56.790149 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-03 00:42:56.790153 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-03 00:42:56.790157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-03 00:42:56.790161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-03 00:42:56.790165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-03 00:42:56.790169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-03 00:42:56.790173 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-03 00:42:56.790177 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-03 00:42:56.790186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-03 00:42:56.790190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-03 00:42:56.790194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-03 00:42:56.790198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-03 00:42:56.790202 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-03 00:42:56.790206 | orchestrator | 2026-01-03 00:42:56.790210 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790214 | orchestrator | Saturday 03 January 2026 00:42:53 +0000 (0:00:00.425) 0:00:31.875 ****** 2026-01-03 00:42:56.790217 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.790221 | orchestrator | 2026-01-03 00:42:56.790225 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790229 | orchestrator | Saturday 03 January 2026 00:42:53 +0000 (0:00:00.191) 0:00:32.066 ****** 2026-01-03 00:42:56.790234 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.790238 | orchestrator | 2026-01-03 00:42:56.790243 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790248 | orchestrator | Saturday 03 January 2026 00:42:53 +0000 (0:00:00.233) 0:00:32.299 ****** 2026-01-03 00:42:56.790255 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.790262 | orchestrator | 2026-01-03 00:42:56.790268 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790274 | orchestrator | Saturday 03 January 2026 00:42:54 +0000 (0:00:00.228) 0:00:32.528 ****** 2026-01-03 00:42:56.790281 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.790287 | orchestrator | 2026-01-03 00:42:56.790297 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790303 | orchestrator | Saturday 03 January 2026 00:42:54 +0000 (0:00:00.188) 0:00:32.716 ****** 2026-01-03 00:42:56.790309 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.790315 | orchestrator | 2026-01-03 00:42:56.790323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790332 | orchestrator | Saturday 03 January 2026 00:42:54 +0000 (0:00:00.170) 0:00:32.887 ****** 2026-01-03 00:42:56.790341 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.790348 | orchestrator | 2026-01-03 00:42:56.790354 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790361 | orchestrator | Saturday 03 January 2026 00:42:55 +0000 (0:00:00.679) 0:00:33.566 ****** 2026-01-03 00:42:56.790367 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.790374 | orchestrator | 2026-01-03 00:42:56.790381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790387 | orchestrator | Saturday 03 January 2026 00:42:55 +0000 (0:00:00.274) 0:00:33.840 ****** 2026-01-03 00:42:56.790394 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.790400 | orchestrator | 2026-01-03 00:42:56.790407 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790414 | orchestrator | Saturday 03 January 2026 00:42:55 +0000 (0:00:00.196) 0:00:34.037 ****** 
2026-01-03 00:42:56.790421 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-03 00:42:56.790429 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-01-03 00:42:56.790434 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-03 00:42:56.790438 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-03 00:42:56.790442 | orchestrator | 2026-01-03 00:42:56.790446 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790450 | orchestrator | Saturday 03 January 2026 00:42:56 +0000 (0:00:00.555) 0:00:34.593 ****** 2026-01-03 00:42:56.790454 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.790462 | orchestrator | 2026-01-03 00:42:56.790466 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790475 | orchestrator | Saturday 03 January 2026 00:42:56 +0000 (0:00:00.165) 0:00:34.758 ****** 2026-01-03 00:42:56.790479 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.790483 | orchestrator | 2026-01-03 00:42:56.790487 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790491 | orchestrator | Saturday 03 January 2026 00:42:56 +0000 (0:00:00.177) 0:00:34.935 ****** 2026-01-03 00:42:56.790495 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.790498 | orchestrator | 2026-01-03 00:42:56.790502 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:42:56.790506 | orchestrator | Saturday 03 January 2026 00:42:56 +0000 (0:00:00.147) 0:00:35.083 ****** 2026-01-03 00:42:56.790510 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:42:56.790514 | orchestrator | 2026-01-03 00:42:56.790523 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-03 00:42:59.852177 | orchestrator | Saturday 03 January 2026 00:42:56 
+0000 (0:00:00.138) 0:00:35.222 ******
2026-01-03 00:42:59.852283 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-01-03 00:42:59.852298 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-01-03 00:42:59.852310 | orchestrator |
2026-01-03 00:42:59.852323 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-03 00:42:59.852334 | orchestrator | Saturday 03 January 2026 00:42:56 +0000 (0:00:00.118) 0:00:35.340 ******
2026-01-03 00:42:59.852346 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:59.852357 | orchestrator |
2026-01-03 00:42:59.852368 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-03 00:42:59.852379 | orchestrator | Saturday 03 January 2026 00:42:56 +0000 (0:00:00.089) 0:00:35.430 ******
2026-01-03 00:42:59.852390 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:59.852401 | orchestrator |
2026-01-03 00:42:59.852412 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-03 00:42:59.852429 | orchestrator | Saturday 03 January 2026 00:42:57 +0000 (0:00:00.090) 0:00:35.520 ******
2026-01-03 00:42:59.852447 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:59.852465 | orchestrator |
2026-01-03 00:42:59.852484 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-03 00:42:59.852502 | orchestrator | Saturday 03 January 2026 00:42:57 +0000 (0:00:00.210) 0:00:35.731 ******
2026-01-03 00:42:59.852520 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:42:59.852535 | orchestrator |
2026-01-03 00:42:59.852547 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-03 00:42:59.852559 | orchestrator | Saturday 03 January 2026 00:42:57 +0000 (0:00:00.093) 0:00:35.825 ******
2026-01-03 00:42:59.852570 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '124077fc-a709-5275-a3b4-8defea20aa20'}})
2026-01-03 00:42:59.852582 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43153f84-c643-5017-9328-2bdcf330b780'}})
2026-01-03 00:42:59.852593 | orchestrator |
2026-01-03 00:42:59.852604 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-03 00:42:59.852615 | orchestrator | Saturday 03 January 2026 00:42:57 +0000 (0:00:00.126) 0:00:35.952 ******
2026-01-03 00:42:59.852626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '124077fc-a709-5275-a3b4-8defea20aa20'}})
2026-01-03 00:42:59.852656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43153f84-c643-5017-9328-2bdcf330b780'}})
2026-01-03 00:42:59.852668 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:59.852746 | orchestrator |
2026-01-03 00:42:59.852761 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-03 00:42:59.852775 | orchestrator | Saturday 03 January 2026 00:42:57 +0000 (0:00:00.139) 0:00:36.091 ******
2026-01-03 00:42:59.852826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '124077fc-a709-5275-a3b4-8defea20aa20'}})
2026-01-03 00:42:59.852848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43153f84-c643-5017-9328-2bdcf330b780'}})
2026-01-03 00:42:59.852867 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:59.852886 | orchestrator |
2026-01-03 00:42:59.852906 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-03 00:42:59.852919 | orchestrator | Saturday 03 January 2026 00:42:57 +0000 (0:00:00.125) 0:00:36.216 ******
2026-01-03 00:42:59.852931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '124077fc-a709-5275-a3b4-8defea20aa20'}})
2026-01-03 00:42:59.852944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43153f84-c643-5017-9328-2bdcf330b780'}})
2026-01-03 00:42:59.852957 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:59.852970 | orchestrator |
2026-01-03 00:42:59.852983 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-03 00:42:59.852995 | orchestrator | Saturday 03 January 2026 00:42:57 +0000 (0:00:00.110) 0:00:36.326 ******
2026-01-03 00:42:59.853007 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:42:59.853019 | orchestrator |
2026-01-03 00:42:59.853032 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-03 00:42:59.853045 | orchestrator | Saturday 03 January 2026 00:42:57 +0000 (0:00:00.101) 0:00:36.427 ******
2026-01-03 00:42:59.853058 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:42:59.853070 | orchestrator |
2026-01-03 00:42:59.853081 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-03 00:42:59.853092 | orchestrator | Saturday 03 January 2026 00:42:58 +0000 (0:00:00.113) 0:00:36.541 ******
2026-01-03 00:42:59.853103 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:59.853114 | orchestrator |
2026-01-03 00:42:59.853125 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-03 00:42:59.853136 | orchestrator | Saturday 03 January 2026 00:42:58 +0000 (0:00:00.099) 0:00:36.641 ******
2026-01-03 00:42:59.853147 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:59.853157 | orchestrator |
2026-01-03 00:42:59.853185 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-03 00:42:59.853208 | orchestrator | Saturday 03 January 2026 00:42:58 +0000
(0:00:00.102) 0:00:36.743 ******
2026-01-03 00:42:59.853220 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:59.853230 | orchestrator |
2026-01-03 00:42:59.853241 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-03 00:42:59.853252 | orchestrator | Saturday 03 January 2026 00:42:58 +0000 (0:00:00.097) 0:00:36.841 ******
2026-01-03 00:42:59.853263 | orchestrator | ok: [testbed-node-5] => {
2026-01-03 00:42:59.853274 | orchestrator |     "ceph_osd_devices": {
2026-01-03 00:42:59.853285 | orchestrator |         "sdb": {
2026-01-03 00:42:59.853314 | orchestrator |             "osd_lvm_uuid": "124077fc-a709-5275-a3b4-8defea20aa20"
2026-01-03 00:42:59.853327 | orchestrator |         },
2026-01-03 00:42:59.853338 | orchestrator |         "sdc": {
2026-01-03 00:42:59.853349 | orchestrator |             "osd_lvm_uuid": "43153f84-c643-5017-9328-2bdcf330b780"
2026-01-03 00:42:59.853361 | orchestrator |         }
2026-01-03 00:42:59.853371 | orchestrator |     }
2026-01-03 00:42:59.853382 | orchestrator | }
2026-01-03 00:42:59.853394 | orchestrator |
2026-01-03 00:42:59.853405 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-03 00:42:59.853416 | orchestrator | Saturday 03 January 2026 00:42:58 +0000 (0:00:00.094) 0:00:36.935 ******
2026-01-03 00:42:59.853427 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:59.853438 | orchestrator |
2026-01-03 00:42:59.853449 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-03 00:42:59.853459 | orchestrator | Saturday 03 January 2026 00:42:58 +0000 (0:00:00.224) 0:00:37.159 ******
2026-01-03 00:42:59.853482 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:59.853501 | orchestrator |
2026-01-03 00:42:59.853519 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-03 00:42:59.853530 | orchestrator | Saturday 03 January 2026 00:42:58 +0000 (0:00:00.088) 0:00:37.247 ******
2026-01-03 00:42:59.853541 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:42:59.853552 | orchestrator |
2026-01-03 00:42:59.853563 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-03 00:42:59.853574 | orchestrator | Saturday 03 January 2026 00:42:58 +0000 (0:00:00.103) 0:00:37.351 ******
2026-01-03 00:42:59.853585 | orchestrator | changed: [testbed-node-5] => {
2026-01-03 00:42:59.853596 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-03 00:42:59.853607 | orchestrator |         "ceph_osd_devices": {
2026-01-03 00:42:59.853618 | orchestrator |             "sdb": {
2026-01-03 00:42:59.853629 | orchestrator |                 "osd_lvm_uuid": "124077fc-a709-5275-a3b4-8defea20aa20"
2026-01-03 00:42:59.853640 | orchestrator |             },
2026-01-03 00:42:59.853652 | orchestrator |             "sdc": {
2026-01-03 00:42:59.853663 | orchestrator |                 "osd_lvm_uuid": "43153f84-c643-5017-9328-2bdcf330b780"
2026-01-03 00:42:59.853708 | orchestrator |             }
2026-01-03 00:42:59.853722 | orchestrator |         },
2026-01-03 00:42:59.853732 | orchestrator |         "lvm_volumes": [
2026-01-03 00:42:59.853743 | orchestrator |             {
2026-01-03 00:42:59.853754 | orchestrator |                 "data": "osd-block-124077fc-a709-5275-a3b4-8defea20aa20",
2026-01-03 00:42:59.853765 | orchestrator |                 "data_vg": "ceph-124077fc-a709-5275-a3b4-8defea20aa20"
2026-01-03 00:42:59.853776 | orchestrator |             },
2026-01-03 00:42:59.853787 | orchestrator |             {
2026-01-03 00:42:59.853798 | orchestrator |                 "data": "osd-block-43153f84-c643-5017-9328-2bdcf330b780",
2026-01-03 00:42:59.853819 | orchestrator |                 "data_vg": "ceph-43153f84-c643-5017-9328-2bdcf330b780"
2026-01-03 00:42:59.853830 | orchestrator |             }
2026-01-03 00:42:59.853846 | orchestrator |         ]
2026-01-03 00:42:59.853858 | orchestrator |     }
2026-01-03 00:42:59.853869 | orchestrator | }
2026-01-03 00:42:59.853880 | orchestrator |
2026-01-03 00:42:59.853891 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-03 00:42:59.853902 | orchestrator | Saturday 03 January 2026 00:42:59 +0000 (0:00:00.184) 0:00:37.536 ******
2026-01-03 00:42:59.853913 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-03 00:42:59.853924 | orchestrator |
2026-01-03 00:42:59.853934 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:42:59.853946 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-03 00:42:59.853958 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-03 00:42:59.853969 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-03 00:42:59.853980 | orchestrator |
2026-01-03 00:42:59.853991 | orchestrator |
2026-01-03 00:42:59.854002 | orchestrator |
2026-01-03 00:42:59.854080 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:42:59.854094 | orchestrator | Saturday 03 January 2026 00:42:59 +0000 (0:00:00.737) 0:00:38.273 ******
2026-01-03 00:42:59.854105 | orchestrator | ===============================================================================
2026-01-03 00:42:59.854116 | orchestrator | Write configuration file ------------------------------------------------ 3.44s
2026-01-03 00:42:59.854127 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s
2026-01-03 00:42:59.854140 | orchestrator | Add known links to the list of available block devices ------------------ 1.07s
2026-01-03 00:42:59.854157 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2026-01-03 00:42:59.854182 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.98s
2026-01-03 00:42:59.854237 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s
2026-01-03 00:42:59.854249 | orchestrator | Print configuration data ------------------------------------------------ 0.78s
2026-01-03 00:42:59.854267 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-01-03 00:42:59.854280 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2026-01-03 00:42:59.854291 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2026-01-03 00:42:59.854302 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2026-01-03 00:42:59.854313 | orchestrator | Get initial list of available block devices ----------------------------- 0.64s
2026-01-03 00:42:59.854325 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-01-03 00:42:59.854345 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2026-01-03 00:43:00.057429 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.58s
2026-01-03 00:43:00.057517 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s
2026-01-03 00:43:00.057528 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s
2026-01-03 00:43:00.057536 | orchestrator | Add known partitions to the list of available block devices ------------- 0.48s
2026-01-03 00:43:00.057545 | orchestrator | Print WAL devices ------------------------------------------------------- 0.47s
2026-01-03 00:43:00.057553 | orchestrator | Set DB devices config data ---------------------------------------------- 0.47s
2026-01-03 00:43:22.743105 | orchestrator | 2026-01-03 00:43:22 | INFO  | Task 33567897-60e1-45a8-9f5c-c338696be5c3 (sync inventory) is running in background. Output coming soon.
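The configuration data printed above shows a regular mapping: each entry in `ceph_osd_devices` carries an `osd_lvm_uuid`, and the compiled `lvm_volumes` list names one logical volume `osd-block-<uuid>` inside a volume group `ceph-<uuid>` per OSD. The following Python sketch reconstructs that mapping for the block-only case; it is an illustration derived from the log output, not the playbook's actual task code, and the function name `compile_lvm_volumes` is invented for this example.

```python
# Illustrative reconstruction (not the playbook's code) of the
# ceph_osd_devices -> lvm_volumes mapping visible in the log above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "124077fc-a709-5275-a3b4-8defea20aa20"},
    "sdc": {"osd_lvm_uuid": "43153f84-c643-5017-9328-2bdcf330b780"},
}

def compile_lvm_volumes(devices):
    """Block-only case: one osd-block-<uuid> LV in a ceph-<uuid> VG per OSD."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for _device, spec in sorted(devices.items())
    ]

lvm_volumes = compile_lvm_volumes(ceph_osd_devices)
print(lvm_volumes[0]["data_vg"])  # ceph-124077fc-a709-5275-a3b4-8defea20aa20
```

With separate DB or WAL devices configured, the play's other branches (block + db, block + wal, block + db + wal) would add `db`/`wal` keys to each entry; in this run those branches were all skipped.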
2026-01-03 00:43:47.619104 | orchestrator | 2026-01-03 00:43:24 | INFO  | Starting group_vars file reorganization
2026-01-03 00:43:47.619185 | orchestrator | 2026-01-03 00:43:24 | INFO  | Moved 0 file(s) to their respective directories
2026-01-03 00:43:47.619194 | orchestrator | 2026-01-03 00:43:24 | INFO  | Group_vars file reorganization completed
2026-01-03 00:43:47.619201 | orchestrator | 2026-01-03 00:43:27 | INFO  | Starting variable preparation from inventory
2026-01-03 00:43:47.619207 | orchestrator | 2026-01-03 00:43:29 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-03 00:43:47.619213 | orchestrator | 2026-01-03 00:43:29 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-03 00:43:47.619219 | orchestrator | 2026-01-03 00:43:29 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-03 00:43:47.619225 | orchestrator | 2026-01-03 00:43:29 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-03 00:43:47.619231 | orchestrator | 2026-01-03 00:43:29 | INFO  | Variable preparation completed
2026-01-03 00:43:47.619236 | orchestrator | 2026-01-03 00:43:30 | INFO  | Starting inventory overwrite handling
2026-01-03 00:43:47.619242 | orchestrator | 2026-01-03 00:43:30 | INFO  | Handling group overwrites in 99-overwrite
2026-01-03 00:43:47.619248 | orchestrator | 2026-01-03 00:43:30 | INFO  | Removing group frr:children from 60-generic
2026-01-03 00:43:47.619254 | orchestrator | 2026-01-03 00:43:30 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-03 00:43:47.619259 | orchestrator | 2026-01-03 00:43:30 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-03 00:43:47.619265 | orchestrator | 2026-01-03 00:43:30 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-03 00:43:47.619271 | orchestrator | 2026-01-03 00:43:30 | INFO  | Handling group overwrites in 20-roles
2026-01-03 00:43:47.619298 | orchestrator | 2026-01-03 00:43:30 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-03 00:43:47.619304 | orchestrator | 2026-01-03 00:43:30 | INFO  | Removed 5 group(s) in total
2026-01-03 00:43:47.619309 | orchestrator | 2026-01-03 00:43:30 | INFO  | Inventory overwrite handling completed
2026-01-03 00:43:47.619319 | orchestrator | 2026-01-03 00:43:31 | INFO  | Starting merge of inventory files
2026-01-03 00:43:47.619328 | orchestrator | 2026-01-03 00:43:31 | INFO  | Inventory files merged successfully
2026-01-03 00:43:47.619337 | orchestrator | 2026-01-03 00:43:36 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-03 00:43:47.619346 | orchestrator | 2026-01-03 00:43:46 | INFO  | Successfully wrote ClusterShell configuration
2026-01-03 00:43:47.619356 | orchestrator | [master 265f15b] 2026-01-03-00-43
2026-01-03 00:43:47.619367 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-03 00:43:49.613747 | orchestrator | 2026-01-03 00:43:49 | INFO  | Task 4057fd44-c477-4901-8f32-01cbf0f98d6f (ceph-create-lvm-devices) was prepared for execution.
2026-01-03 00:43:49.613876 | orchestrator | 2026-01-03 00:43:49 | INFO  | It takes a moment until task 4057fd44-c477-4901-8f32-01cbf0f98d6f (ceph-create-lvm-devices) has been started and output is visible here.
2026-01-03 00:44:00.278385 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-03 00:44:00.278545 | orchestrator | 2.16.14
2026-01-03 00:44:00.278564 | orchestrator |
2026-01-03 00:44:00.278577 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-03 00:44:00.278590 | orchestrator |
2026-01-03 00:44:00.278602 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-03 00:44:00.278614 | orchestrator | Saturday 03 January 2026 00:43:53 +0000 (0:00:00.273) 0:00:00.273 ******
2026-01-03 00:44:00.278627 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-03 00:44:00.278799 | orchestrator |
2026-01-03 00:44:00.278811 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-03 00:44:00.278822 | orchestrator | Saturday 03 January 2026 00:43:53 +0000 (0:00:00.253) 0:00:00.526 ******
2026-01-03 00:44:00.278834 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:44:00.278846 | orchestrator |
2026-01-03 00:44:00.278862 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-03 00:44:00.278876 | orchestrator | Saturday 03 January 2026 00:43:54 +0000 (0:00:00.204) 0:00:00.731 ******
2026-01-03 00:44:00.278889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-03 00:44:00.278903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-03 00:44:00.278916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-03 00:44:00.278929 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-03 00:44:00.278942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-03
00:44:00.278954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-01-03 00:44:00.278967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-01-03 00:44:00.278980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-01-03 00:44:00.278994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-01-03 00:44:00.279033 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-01-03 00:44:00.279045 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-01-03 00:44:00.279056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-01-03 00:44:00.279095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-01-03 00:44:00.279106 | orchestrator | 2026-01-03 00:44:00.279117 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:00.279128 | orchestrator | Saturday 03 January 2026 00:43:54 +0000 (0:00:00.565) 0:00:01.296 ****** 2026-01-03 00:44:00.279142 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.279162 | orchestrator | 2026-01-03 00:44:00.279180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:00.279198 | orchestrator | Saturday 03 January 2026 00:43:54 +0000 (0:00:00.204) 0:00:01.500 ****** 2026-01-03 00:44:00.279215 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.279233 | orchestrator | 2026-01-03 00:44:00.279256 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:00.279272 | orchestrator | Saturday 03 January 2026 00:43:55 +0000 (0:00:00.212) 0:00:01.713 ****** 2026-01-03 
00:44:00.279288 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.279305 | orchestrator | 2026-01-03 00:44:00.279321 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:00.279338 | orchestrator | Saturday 03 January 2026 00:43:55 +0000 (0:00:00.183) 0:00:01.896 ****** 2026-01-03 00:44:00.279354 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.279371 | orchestrator | 2026-01-03 00:44:00.279389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:00.279409 | orchestrator | Saturday 03 January 2026 00:43:55 +0000 (0:00:00.192) 0:00:02.088 ****** 2026-01-03 00:44:00.279426 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.279444 | orchestrator | 2026-01-03 00:44:00.279462 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:00.279480 | orchestrator | Saturday 03 January 2026 00:43:55 +0000 (0:00:00.203) 0:00:02.292 ****** 2026-01-03 00:44:00.279496 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.279512 | orchestrator | 2026-01-03 00:44:00.279528 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:00.279545 | orchestrator | Saturday 03 January 2026 00:43:55 +0000 (0:00:00.222) 0:00:02.515 ****** 2026-01-03 00:44:00.279563 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.279579 | orchestrator | 2026-01-03 00:44:00.279596 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:00.279614 | orchestrator | Saturday 03 January 2026 00:43:56 +0000 (0:00:00.221) 0:00:02.736 ****** 2026-01-03 00:44:00.279684 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.279709 | orchestrator | 2026-01-03 00:44:00.279728 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-01-03 00:44:00.279747 | orchestrator | Saturday 03 January 2026 00:43:56 +0000 (0:00:00.192) 0:00:02.929 ****** 2026-01-03 00:44:00.279764 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36) 2026-01-03 00:44:00.279784 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36) 2026-01-03 00:44:00.279803 | orchestrator | 2026-01-03 00:44:00.279821 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:00.279875 | orchestrator | Saturday 03 January 2026 00:43:56 +0000 (0:00:00.393) 0:00:03.323 ****** 2026-01-03 00:44:00.279896 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_87925512-ad51-4d26-92cc-5f354ec37d18) 2026-01-03 00:44:00.279915 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_87925512-ad51-4d26-92cc-5f354ec37d18) 2026-01-03 00:44:00.279934 | orchestrator | 2026-01-03 00:44:00.279951 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:00.279969 | orchestrator | Saturday 03 January 2026 00:43:57 +0000 (0:00:00.514) 0:00:03.838 ****** 2026-01-03 00:44:00.279987 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5af48d4e-a5d9-4c77-9873-39f930691ccf) 2026-01-03 00:44:00.280025 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5af48d4e-a5d9-4c77-9873-39f930691ccf) 2026-01-03 00:44:00.280044 | orchestrator | 2026-01-03 00:44:00.280063 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:00.280080 | orchestrator | Saturday 03 January 2026 00:43:57 +0000 (0:00:00.520) 0:00:04.358 ****** 2026-01-03 00:44:00.280099 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0b3b8b78-42ce-4774-a4e8-f10424aa2bf1) 2026-01-03 00:44:00.280117 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0b3b8b78-42ce-4774-a4e8-f10424aa2bf1) 2026-01-03 00:44:00.280134 | orchestrator | 2026-01-03 00:44:00.280150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:00.280167 | orchestrator | Saturday 03 January 2026 00:43:58 +0000 (0:00:00.663) 0:00:05.022 ****** 2026-01-03 00:44:00.280184 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-03 00:44:00.280201 | orchestrator | 2026-01-03 00:44:00.280217 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:00.280234 | orchestrator | Saturday 03 January 2026 00:43:58 +0000 (0:00:00.289) 0:00:05.311 ****** 2026-01-03 00:44:00.280251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-03 00:44:00.280271 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-03 00:44:00.280289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-03 00:44:00.280307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-03 00:44:00.280326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-03 00:44:00.280345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-03 00:44:00.280363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-03 00:44:00.280382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-03 00:44:00.280395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-03 00:44:00.280406 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-03 00:44:00.280418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-03 00:44:00.280429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-03 00:44:00.280440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-03 00:44:00.280451 | orchestrator | 2026-01-03 00:44:00.280462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:00.280473 | orchestrator | Saturday 03 January 2026 00:43:58 +0000 (0:00:00.368) 0:00:05.679 ****** 2026-01-03 00:44:00.280484 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.280495 | orchestrator | 2026-01-03 00:44:00.280506 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:00.280517 | orchestrator | Saturday 03 January 2026 00:43:59 +0000 (0:00:00.188) 0:00:05.868 ****** 2026-01-03 00:44:00.280527 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.280538 | orchestrator | 2026-01-03 00:44:00.280549 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:00.280560 | orchestrator | Saturday 03 January 2026 00:43:59 +0000 (0:00:00.176) 0:00:06.045 ****** 2026-01-03 00:44:00.280570 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.280581 | orchestrator | 2026-01-03 00:44:00.280592 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:00.280603 | orchestrator | Saturday 03 January 2026 00:43:59 +0000 (0:00:00.183) 0:00:06.228 ****** 2026-01-03 00:44:00.280614 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.280661 | orchestrator | 2026-01-03 00:44:00.280673 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-03 00:44:00.280684 | orchestrator | Saturday 03 January 2026 00:43:59 +0000 (0:00:00.187) 0:00:06.416 ****** 2026-01-03 00:44:00.280695 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.280706 | orchestrator | 2026-01-03 00:44:00.280716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:00.280727 | orchestrator | Saturday 03 January 2026 00:43:59 +0000 (0:00:00.179) 0:00:06.596 ****** 2026-01-03 00:44:00.280738 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.280749 | orchestrator | 2026-01-03 00:44:00.280760 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:00.280771 | orchestrator | Saturday 03 January 2026 00:44:00 +0000 (0:00:00.170) 0:00:06.766 ****** 2026-01-03 00:44:00.280782 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:00.280792 | orchestrator | 2026-01-03 00:44:00.280826 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:07.643780 | orchestrator | Saturday 03 January 2026 00:44:00 +0000 (0:00:00.197) 0:00:06.963 ****** 2026-01-03 00:44:07.643899 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.643911 | orchestrator | 2026-01-03 00:44:07.643920 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:07.643928 | orchestrator | Saturday 03 January 2026 00:44:00 +0000 (0:00:00.167) 0:00:07.131 ****** 2026-01-03 00:44:07.643937 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-03 00:44:07.643945 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-03 00:44:07.643954 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-03 00:44:07.643961 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-03 00:44:07.643969 | orchestrator | 2026-01-03 
00:44:07.643977 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:07.643984 | orchestrator | Saturday 03 January 2026 00:44:01 +0000 (0:00:00.852) 0:00:07.983 ****** 2026-01-03 00:44:07.643992 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.643999 | orchestrator | 2026-01-03 00:44:07.644006 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:07.644014 | orchestrator | Saturday 03 January 2026 00:44:01 +0000 (0:00:00.167) 0:00:08.151 ****** 2026-01-03 00:44:07.644022 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644029 | orchestrator | 2026-01-03 00:44:07.644037 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:07.644045 | orchestrator | Saturday 03 January 2026 00:44:01 +0000 (0:00:00.162) 0:00:08.314 ****** 2026-01-03 00:44:07.644052 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644059 | orchestrator | 2026-01-03 00:44:07.644067 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:07.644074 | orchestrator | Saturday 03 January 2026 00:44:01 +0000 (0:00:00.175) 0:00:08.489 ****** 2026-01-03 00:44:07.644081 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644088 | orchestrator | 2026-01-03 00:44:07.644096 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-03 00:44:07.644103 | orchestrator | Saturday 03 January 2026 00:44:01 +0000 (0:00:00.189) 0:00:08.679 ****** 2026-01-03 00:44:07.644110 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644117 | orchestrator | 2026-01-03 00:44:07.644125 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-03 00:44:07.644133 | orchestrator | Saturday 03 January 2026 00:44:02 +0000 (0:00:00.130) 
0:00:08.810 ****** 2026-01-03 00:44:07.644164 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '147f94e4-6564-5421-8ac2-dc0697a6d722'}}) 2026-01-03 00:44:07.644173 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43909478-d18c-58e7-896e-8d0e3e550915'}}) 2026-01-03 00:44:07.644180 | orchestrator | 2026-01-03 00:44:07.644188 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-03 00:44:07.644222 | orchestrator | Saturday 03 January 2026 00:44:02 +0000 (0:00:00.173) 0:00:08.984 ****** 2026-01-03 00:44:07.644233 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'}) 2026-01-03 00:44:07.644246 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'}) 2026-01-03 00:44:07.644256 | orchestrator | 2026-01-03 00:44:07.644272 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-03 00:44:07.644283 | orchestrator | Saturday 03 January 2026 00:44:04 +0000 (0:00:01.925) 0:00:10.910 ****** 2026-01-03 00:44:07.644293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:07.644306 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:07.644316 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644327 | orchestrator | 2026-01-03 00:44:07.644336 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-03 00:44:07.644345 | orchestrator | Saturday 03 January 2026 
00:44:04 +0000 (0:00:00.143) 0:00:11.053 ****** 2026-01-03 00:44:07.644354 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'}) 2026-01-03 00:44:07.644363 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'}) 2026-01-03 00:44:07.644371 | orchestrator | 2026-01-03 00:44:07.644381 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-03 00:44:07.644390 | orchestrator | Saturday 03 January 2026 00:44:05 +0000 (0:00:01.523) 0:00:12.577 ****** 2026-01-03 00:44:07.644399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:07.644408 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:07.644417 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644425 | orchestrator | 2026-01-03 00:44:07.644434 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-03 00:44:07.644443 | orchestrator | Saturday 03 January 2026 00:44:06 +0000 (0:00:00.160) 0:00:12.737 ****** 2026-01-03 00:44:07.644470 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644480 | orchestrator | 2026-01-03 00:44:07.644489 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-03 00:44:07.644498 | orchestrator | Saturday 03 January 2026 00:44:06 +0000 (0:00:00.138) 0:00:12.876 ****** 2026-01-03 00:44:07.644506 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 
'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:07.644515 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:07.644524 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644533 | orchestrator | 2026-01-03 00:44:07.644542 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-03 00:44:07.644550 | orchestrator | Saturday 03 January 2026 00:44:06 +0000 (0:00:00.253) 0:00:13.130 ****** 2026-01-03 00:44:07.644559 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644568 | orchestrator | 2026-01-03 00:44:07.644577 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-03 00:44:07.644586 | orchestrator | Saturday 03 January 2026 00:44:06 +0000 (0:00:00.129) 0:00:13.259 ****** 2026-01-03 00:44:07.644601 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:07.644610 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:07.644619 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644650 | orchestrator | 2026-01-03 00:44:07.644660 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-03 00:44:07.644669 | orchestrator | Saturday 03 January 2026 00:44:06 +0000 (0:00:00.117) 0:00:13.377 ****** 2026-01-03 00:44:07.644677 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644686 | orchestrator | 2026-01-03 00:44:07.644695 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-03 00:44:07.644704 | orchestrator | 
Saturday 03 January 2026 00:44:06 +0000 (0:00:00.115) 0:00:13.493 ****** 2026-01-03 00:44:07.644712 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:07.644721 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:07.644730 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644739 | orchestrator | 2026-01-03 00:44:07.644747 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-03 00:44:07.644756 | orchestrator | Saturday 03 January 2026 00:44:06 +0000 (0:00:00.130) 0:00:13.623 ****** 2026-01-03 00:44:07.644765 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:44:07.644774 | orchestrator | 2026-01-03 00:44:07.644783 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-03 00:44:07.644791 | orchestrator | Saturday 03 January 2026 00:44:07 +0000 (0:00:00.145) 0:00:13.769 ****** 2026-01-03 00:44:07.644805 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:07.644814 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:07.644823 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644832 | orchestrator | 2026-01-03 00:44:07.644841 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-03 00:44:07.644849 | orchestrator | Saturday 03 January 2026 00:44:07 +0000 (0:00:00.145) 0:00:13.915 ****** 2026-01-03 00:44:07.644858 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:07.644867 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:07.644876 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644884 | orchestrator | 2026-01-03 00:44:07.644893 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-03 00:44:07.644902 | orchestrator | Saturday 03 January 2026 00:44:07 +0000 (0:00:00.147) 0:00:14.062 ****** 2026-01-03 00:44:07.644910 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:07.644919 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:07.644928 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644937 | orchestrator | 2026-01-03 00:44:07.644945 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-03 00:44:07.644960 | orchestrator | Saturday 03 January 2026 00:44:07 +0000 (0:00:00.140) 0:00:14.202 ****** 2026-01-03 00:44:07.644969 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:07.644978 | orchestrator | 2026-01-03 00:44:07.644987 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-03 00:44:07.645001 | orchestrator | Saturday 03 January 2026 00:44:07 +0000 (0:00:00.127) 0:00:14.330 ****** 2026-01-03 00:44:13.801917 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802005 | orchestrator | 2026-01-03 00:44:13.802057 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-01-03 00:44:13.802067 | orchestrator | Saturday 03 January 2026 00:44:07 +0000 (0:00:00.128) 0:00:14.458 ****** 2026-01-03 00:44:13.802074 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802082 | orchestrator | 2026-01-03 00:44:13.802089 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-03 00:44:13.802097 | orchestrator | Saturday 03 January 2026 00:44:07 +0000 (0:00:00.117) 0:00:14.575 ****** 2026-01-03 00:44:13.802104 | orchestrator | ok: [testbed-node-3] => { 2026-01-03 00:44:13.802112 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-03 00:44:13.802120 | orchestrator | } 2026-01-03 00:44:13.802127 | orchestrator | 2026-01-03 00:44:13.802134 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-03 00:44:13.802142 | orchestrator | Saturday 03 January 2026 00:44:08 +0000 (0:00:00.248) 0:00:14.823 ****** 2026-01-03 00:44:13.802149 | orchestrator | ok: [testbed-node-3] => { 2026-01-03 00:44:13.802156 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-03 00:44:13.802163 | orchestrator | } 2026-01-03 00:44:13.802170 | orchestrator | 2026-01-03 00:44:13.802178 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-03 00:44:13.802185 | orchestrator | Saturday 03 January 2026 00:44:08 +0000 (0:00:00.128) 0:00:14.952 ****** 2026-01-03 00:44:13.802193 | orchestrator | ok: [testbed-node-3] => { 2026-01-03 00:44:13.802200 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-03 00:44:13.802208 | orchestrator | } 2026-01-03 00:44:13.802215 | orchestrator | 2026-01-03 00:44:13.802222 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-03 00:44:13.802229 | orchestrator | Saturday 03 January 2026 00:44:08 +0000 (0:00:00.124) 0:00:15.077 ****** 2026-01-03 00:44:13.802236 | orchestrator | ok: 
[testbed-node-3] 2026-01-03 00:44:13.802244 | orchestrator | 2026-01-03 00:44:13.802251 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-03 00:44:13.802258 | orchestrator | Saturday 03 January 2026 00:44:09 +0000 (0:00:00.656) 0:00:15.734 ****** 2026-01-03 00:44:13.802265 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:44:13.802273 | orchestrator | 2026-01-03 00:44:13.802280 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-03 00:44:13.802287 | orchestrator | Saturday 03 January 2026 00:44:09 +0000 (0:00:00.559) 0:00:16.293 ****** 2026-01-03 00:44:13.802294 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:44:13.802301 | orchestrator | 2026-01-03 00:44:13.802308 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-03 00:44:13.802316 | orchestrator | Saturday 03 January 2026 00:44:10 +0000 (0:00:00.510) 0:00:16.804 ****** 2026-01-03 00:44:13.802323 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:44:13.802330 | orchestrator | 2026-01-03 00:44:13.802337 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-03 00:44:13.802344 | orchestrator | Saturday 03 January 2026 00:44:10 +0000 (0:00:00.121) 0:00:16.925 ****** 2026-01-03 00:44:13.802351 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802359 | orchestrator | 2026-01-03 00:44:13.802366 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-03 00:44:13.802373 | orchestrator | Saturday 03 January 2026 00:44:10 +0000 (0:00:00.126) 0:00:17.051 ****** 2026-01-03 00:44:13.802380 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802387 | orchestrator | 2026-01-03 00:44:13.802394 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-03 00:44:13.802422 | orchestrator | 
Saturday 03 January 2026 00:44:10 +0000 (0:00:00.118) 0:00:17.170 ****** 2026-01-03 00:44:13.802429 | orchestrator | ok: [testbed-node-3] => { 2026-01-03 00:44:13.802437 | orchestrator |  "vgs_report": { 2026-01-03 00:44:13.802444 | orchestrator |  "vg": [] 2026-01-03 00:44:13.802451 | orchestrator |  } 2026-01-03 00:44:13.802460 | orchestrator | } 2026-01-03 00:44:13.802468 | orchestrator | 2026-01-03 00:44:13.802476 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-03 00:44:13.802485 | orchestrator | Saturday 03 January 2026 00:44:10 +0000 (0:00:00.134) 0:00:17.304 ****** 2026-01-03 00:44:13.802493 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802501 | orchestrator | 2026-01-03 00:44:13.802510 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-03 00:44:13.802532 | orchestrator | Saturday 03 January 2026 00:44:10 +0000 (0:00:00.117) 0:00:17.422 ****** 2026-01-03 00:44:13.802542 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802550 | orchestrator | 2026-01-03 00:44:13.802558 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-03 00:44:13.802566 | orchestrator | Saturday 03 January 2026 00:44:10 +0000 (0:00:00.146) 0:00:17.569 ****** 2026-01-03 00:44:13.802574 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802581 | orchestrator | 2026-01-03 00:44:13.802594 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-03 00:44:13.802607 | orchestrator | Saturday 03 January 2026 00:44:11 +0000 (0:00:00.240) 0:00:17.809 ****** 2026-01-03 00:44:13.802617 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802649 | orchestrator | 2026-01-03 00:44:13.802662 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-03 00:44:13.802674 | orchestrator | 
Saturday 03 January 2026 00:44:11 +0000 (0:00:00.151) 0:00:17.961 ****** 2026-01-03 00:44:13.802685 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802697 | orchestrator | 2026-01-03 00:44:13.802709 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-03 00:44:13.802722 | orchestrator | Saturday 03 January 2026 00:44:11 +0000 (0:00:00.158) 0:00:18.119 ****** 2026-01-03 00:44:13.802734 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802746 | orchestrator | 2026-01-03 00:44:13.802758 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-03 00:44:13.802770 | orchestrator | Saturday 03 January 2026 00:44:11 +0000 (0:00:00.138) 0:00:18.258 ****** 2026-01-03 00:44:13.802783 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802795 | orchestrator | 2026-01-03 00:44:13.802808 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-03 00:44:13.802816 | orchestrator | Saturday 03 January 2026 00:44:11 +0000 (0:00:00.125) 0:00:18.383 ****** 2026-01-03 00:44:13.802840 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802848 | orchestrator | 2026-01-03 00:44:13.802855 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-03 00:44:13.802863 | orchestrator | Saturday 03 January 2026 00:44:11 +0000 (0:00:00.130) 0:00:18.514 ****** 2026-01-03 00:44:13.802870 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802877 | orchestrator | 2026-01-03 00:44:13.802885 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-03 00:44:13.802892 | orchestrator | Saturday 03 January 2026 00:44:11 +0000 (0:00:00.145) 0:00:18.659 ****** 2026-01-03 00:44:13.802899 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802906 | orchestrator | 2026-01-03 00:44:13.802914 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-03 00:44:13.802921 | orchestrator | Saturday 03 January 2026 00:44:12 +0000 (0:00:00.144) 0:00:18.804 ****** 2026-01-03 00:44:13.802928 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802935 | orchestrator | 2026-01-03 00:44:13.802943 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-03 00:44:13.802950 | orchestrator | Saturday 03 January 2026 00:44:12 +0000 (0:00:00.126) 0:00:18.930 ****** 2026-01-03 00:44:13.802965 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.802973 | orchestrator | 2026-01-03 00:44:13.802980 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-03 00:44:13.802987 | orchestrator | Saturday 03 January 2026 00:44:12 +0000 (0:00:00.141) 0:00:19.072 ****** 2026-01-03 00:44:13.802995 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.803002 | orchestrator | 2026-01-03 00:44:13.803009 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-03 00:44:13.803017 | orchestrator | Saturday 03 January 2026 00:44:12 +0000 (0:00:00.131) 0:00:19.204 ****** 2026-01-03 00:44:13.803024 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.803031 | orchestrator | 2026-01-03 00:44:13.803039 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-03 00:44:13.803057 | orchestrator | Saturday 03 January 2026 00:44:12 +0000 (0:00:00.140) 0:00:19.344 ****** 2026-01-03 00:44:13.803074 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:13.803084 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 
'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:13.803091 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.803098 | orchestrator | 2026-01-03 00:44:13.803106 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-03 00:44:13.803113 | orchestrator | Saturday 03 January 2026 00:44:12 +0000 (0:00:00.344) 0:00:19.688 ****** 2026-01-03 00:44:13.803120 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:13.803128 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:13.803135 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.803142 | orchestrator | 2026-01-03 00:44:13.803150 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-03 00:44:13.803162 | orchestrator | Saturday 03 January 2026 00:44:13 +0000 (0:00:00.169) 0:00:19.858 ****** 2026-01-03 00:44:13.803170 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:13.803177 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:13.803184 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.803192 | orchestrator | 2026-01-03 00:44:13.803199 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-03 00:44:13.803206 | orchestrator | Saturday 03 January 2026 00:44:13 +0000 (0:00:00.161) 0:00:20.019 ****** 2026-01-03 00:44:13.803213 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:13.803221 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:13.803228 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.803235 | orchestrator | 2026-01-03 00:44:13.803243 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-03 00:44:13.803250 | orchestrator | Saturday 03 January 2026 00:44:13 +0000 (0:00:00.153) 0:00:20.173 ****** 2026-01-03 00:44:13.803257 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:13.803265 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:13.803277 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:13.803284 | orchestrator | 2026-01-03 00:44:13.803291 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-03 00:44:13.803299 | orchestrator | Saturday 03 January 2026 00:44:13 +0000 (0:00:00.158) 0:00:20.332 ****** 2026-01-03 00:44:13.803311 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:18.735819 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:18.735933 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:18.735943 | orchestrator | 2026-01-03 00:44:18.735951 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-01-03 00:44:18.735959 | orchestrator | Saturday 03 January 2026 00:44:13 +0000 (0:00:00.156) 0:00:20.488 ****** 2026-01-03 00:44:18.735966 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:18.735973 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:18.735979 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:18.735984 | orchestrator | 2026-01-03 00:44:18.735991 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-03 00:44:18.735997 | orchestrator | Saturday 03 January 2026 00:44:13 +0000 (0:00:00.163) 0:00:20.652 ****** 2026-01-03 00:44:18.736003 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:18.736009 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:18.736015 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:18.736021 | orchestrator | 2026-01-03 00:44:18.736027 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-03 00:44:18.736032 | orchestrator | Saturday 03 January 2026 00:44:14 +0000 (0:00:00.143) 0:00:20.796 ****** 2026-01-03 00:44:18.736038 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:44:18.736046 | orchestrator | 2026-01-03 00:44:18.736051 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-03 00:44:18.736057 | orchestrator | Saturday 03 January 2026 00:44:14 +0000 
(0:00:00.536) 0:00:21.332 ****** 2026-01-03 00:44:18.736063 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:44:18.736069 | orchestrator | 2026-01-03 00:44:18.736075 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-03 00:44:18.736080 | orchestrator | Saturday 03 January 2026 00:44:15 +0000 (0:00:00.573) 0:00:21.905 ****** 2026-01-03 00:44:18.736086 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:44:18.736092 | orchestrator | 2026-01-03 00:44:18.736098 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-03 00:44:18.736103 | orchestrator | Saturday 03 January 2026 00:44:15 +0000 (0:00:00.199) 0:00:22.104 ****** 2026-01-03 00:44:18.736109 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'vg_name': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'}) 2026-01-03 00:44:18.736118 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'vg_name': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'}) 2026-01-03 00:44:18.736124 | orchestrator | 2026-01-03 00:44:18.736130 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-03 00:44:18.736136 | orchestrator | Saturday 03 January 2026 00:44:15 +0000 (0:00:00.190) 0:00:22.295 ****** 2026-01-03 00:44:18.736169 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:18.736176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:18.736182 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:18.736188 | orchestrator | 2026-01-03 00:44:18.736194 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-03 00:44:18.736199 | orchestrator | Saturday 03 January 2026 00:44:15 +0000 (0:00:00.332) 0:00:22.627 ****** 2026-01-03 00:44:18.736205 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:18.736211 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:18.736217 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:18.736223 | orchestrator | 2026-01-03 00:44:18.736229 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-03 00:44:18.736235 | orchestrator | Saturday 03 January 2026 00:44:16 +0000 (0:00:00.150) 0:00:22.778 ****** 2026-01-03 00:44:18.736241 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})  2026-01-03 00:44:18.736246 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})  2026-01-03 00:44:18.736252 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:44:18.736258 | orchestrator | 2026-01-03 00:44:18.736264 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-03 00:44:18.736270 | orchestrator | Saturday 03 January 2026 00:44:16 +0000 (0:00:00.123) 0:00:22.901 ****** 2026-01-03 00:44:18.736292 | orchestrator | ok: [testbed-node-3] => { 2026-01-03 00:44:18.736299 | orchestrator |  "lvm_report": { 2026-01-03 00:44:18.736305 | orchestrator |  "lv": [ 2026-01-03 00:44:18.736311 | orchestrator |  { 2026-01-03 00:44:18.736317 | orchestrator |  "lv_name": 
"osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722", 2026-01-03 00:44:18.736324 | orchestrator |  "vg_name": "ceph-147f94e4-6564-5421-8ac2-dc0697a6d722" 2026-01-03 00:44:18.736330 | orchestrator |  }, 2026-01-03 00:44:18.736336 | orchestrator |  { 2026-01-03 00:44:18.736341 | orchestrator |  "lv_name": "osd-block-43909478-d18c-58e7-896e-8d0e3e550915", 2026-01-03 00:44:18.736347 | orchestrator |  "vg_name": "ceph-43909478-d18c-58e7-896e-8d0e3e550915" 2026-01-03 00:44:18.736353 | orchestrator |  } 2026-01-03 00:44:18.736359 | orchestrator |  ], 2026-01-03 00:44:18.736365 | orchestrator |  "pv": [ 2026-01-03 00:44:18.736371 | orchestrator |  { 2026-01-03 00:44:18.736376 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-03 00:44:18.736382 | orchestrator |  "vg_name": "ceph-147f94e4-6564-5421-8ac2-dc0697a6d722" 2026-01-03 00:44:18.736388 | orchestrator |  }, 2026-01-03 00:44:18.736394 | orchestrator |  { 2026-01-03 00:44:18.736399 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-03 00:44:18.736405 | orchestrator |  "vg_name": "ceph-43909478-d18c-58e7-896e-8d0e3e550915" 2026-01-03 00:44:18.736430 | orchestrator |  } 2026-01-03 00:44:18.736436 | orchestrator |  ] 2026-01-03 00:44:18.736443 | orchestrator |  } 2026-01-03 00:44:18.736449 | orchestrator | } 2026-01-03 00:44:18.736455 | orchestrator | 2026-01-03 00:44:18.736461 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-03 00:44:18.736467 | orchestrator | 2026-01-03 00:44:18.736473 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-03 00:44:18.736485 | orchestrator | Saturday 03 January 2026 00:44:16 +0000 (0:00:00.248) 0:00:23.150 ****** 2026-01-03 00:44:18.736491 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-03 00:44:18.736497 | orchestrator | 2026-01-03 00:44:18.736503 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-03 
00:44:18.736509 | orchestrator | Saturday 03 January 2026 00:44:16 +0000 (0:00:00.269) 0:00:23.420 ****** 2026-01-03 00:44:18.736515 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:44:18.736521 | orchestrator | 2026-01-03 00:44:18.736527 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:18.736533 | orchestrator | Saturday 03 January 2026 00:44:16 +0000 (0:00:00.215) 0:00:23.635 ****** 2026-01-03 00:44:18.736538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-03 00:44:18.736544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-03 00:44:18.736550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-03 00:44:18.736556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-03 00:44:18.736562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-03 00:44:18.736568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-03 00:44:18.736577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-03 00:44:18.736583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-03 00:44:18.736589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-03 00:44:18.736595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-03 00:44:18.736601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-03 00:44:18.736607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-03 00:44:18.736612 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-03 00:44:18.736618 | orchestrator | 2026-01-03 00:44:18.736643 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:18.736649 | orchestrator | Saturday 03 January 2026 00:44:17 +0000 (0:00:00.362) 0:00:23.998 ****** 2026-01-03 00:44:18.736655 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:18.736661 | orchestrator | 2026-01-03 00:44:18.736667 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:18.736673 | orchestrator | Saturday 03 January 2026 00:44:17 +0000 (0:00:00.187) 0:00:24.185 ****** 2026-01-03 00:44:18.736678 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:18.736684 | orchestrator | 2026-01-03 00:44:18.736690 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:18.736696 | orchestrator | Saturday 03 January 2026 00:44:17 +0000 (0:00:00.174) 0:00:24.360 ****** 2026-01-03 00:44:18.736702 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:18.736708 | orchestrator | 2026-01-03 00:44:18.736713 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:18.736719 | orchestrator | Saturday 03 January 2026 00:44:18 +0000 (0:00:00.453) 0:00:24.813 ****** 2026-01-03 00:44:18.736725 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:18.736731 | orchestrator | 2026-01-03 00:44:18.736737 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:18.736743 | orchestrator | Saturday 03 January 2026 00:44:18 +0000 (0:00:00.227) 0:00:25.041 ****** 2026-01-03 00:44:18.736749 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:18.736754 | orchestrator | 2026-01-03 00:44:18.736760 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-03 00:44:18.736771 | orchestrator | Saturday 03 January 2026 00:44:18 +0000 (0:00:00.171) 0:00:25.213 ****** 2026-01-03 00:44:18.736777 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:18.736783 | orchestrator | 2026-01-03 00:44:18.736793 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:28.776522 | orchestrator | Saturday 03 January 2026 00:44:18 +0000 (0:00:00.209) 0:00:25.422 ****** 2026-01-03 00:44:28.776686 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.776708 | orchestrator | 2026-01-03 00:44:28.776723 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:28.776736 | orchestrator | Saturday 03 January 2026 00:44:18 +0000 (0:00:00.190) 0:00:25.613 ****** 2026-01-03 00:44:28.776748 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.776760 | orchestrator | 2026-01-03 00:44:28.776774 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:28.776788 | orchestrator | Saturday 03 January 2026 00:44:19 +0000 (0:00:00.186) 0:00:25.799 ****** 2026-01-03 00:44:28.776802 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f) 2026-01-03 00:44:28.776818 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f) 2026-01-03 00:44:28.776828 | orchestrator | 2026-01-03 00:44:28.776836 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:28.776844 | orchestrator | Saturday 03 January 2026 00:44:19 +0000 (0:00:00.369) 0:00:26.169 ****** 2026-01-03 00:44:28.776852 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_38f41ea3-c1f8-47b6-a316-62c713a7ab6d) 2026-01-03 00:44:28.776861 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_38f41ea3-c1f8-47b6-a316-62c713a7ab6d) 2026-01-03 00:44:28.776869 | orchestrator | 2026-01-03 00:44:28.776877 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:28.776885 | orchestrator | Saturday 03 January 2026 00:44:19 +0000 (0:00:00.394) 0:00:26.563 ****** 2026-01-03 00:44:28.776892 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_65a14f46-a6f7-4da8-aafc-46a47f969b79) 2026-01-03 00:44:28.776900 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_65a14f46-a6f7-4da8-aafc-46a47f969b79) 2026-01-03 00:44:28.776908 | orchestrator | 2026-01-03 00:44:28.776916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:28.776924 | orchestrator | Saturday 03 January 2026 00:44:20 +0000 (0:00:00.388) 0:00:26.952 ****** 2026-01-03 00:44:28.776932 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e5f15048-4e23-4094-a7eb-216bc02a3879) 2026-01-03 00:44:28.776940 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e5f15048-4e23-4094-a7eb-216bc02a3879) 2026-01-03 00:44:28.776948 | orchestrator | 2026-01-03 00:44:28.776955 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:28.776963 | orchestrator | Saturday 03 January 2026 00:44:20 +0000 (0:00:00.547) 0:00:27.499 ****** 2026-01-03 00:44:28.776971 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-03 00:44:28.776979 | orchestrator | 2026-01-03 00:44:28.776987 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.776995 | orchestrator | Saturday 03 January 2026 00:44:21 +0000 (0:00:00.471) 0:00:27.971 ****** 2026-01-03 00:44:28.777020 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-03 00:44:28.777028 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-03 00:44:28.777036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-03 00:44:28.777047 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-03 00:44:28.777056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-03 00:44:28.777086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-03 00:44:28.777095 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-03 00:44:28.777104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-03 00:44:28.777113 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-03 00:44:28.777122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-03 00:44:28.777131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-03 00:44:28.777141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-03 00:44:28.777150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-03 00:44:28.777160 | orchestrator | 2026-01-03 00:44:28.777169 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.777177 | orchestrator | Saturday 03 January 2026 00:44:21 +0000 (0:00:00.701) 0:00:28.673 ****** 2026-01-03 00:44:28.777185 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777192 | orchestrator | 2026-01-03 
00:44:28.777201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.777209 | orchestrator | Saturday 03 January 2026 00:44:22 +0000 (0:00:00.182) 0:00:28.855 ****** 2026-01-03 00:44:28.777217 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777225 | orchestrator | 2026-01-03 00:44:28.777233 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.777241 | orchestrator | Saturday 03 January 2026 00:44:22 +0000 (0:00:00.187) 0:00:29.043 ****** 2026-01-03 00:44:28.777248 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777256 | orchestrator | 2026-01-03 00:44:28.777279 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.777288 | orchestrator | Saturday 03 January 2026 00:44:22 +0000 (0:00:00.181) 0:00:29.225 ****** 2026-01-03 00:44:28.777296 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777304 | orchestrator | 2026-01-03 00:44:28.777312 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.777320 | orchestrator | Saturday 03 January 2026 00:44:22 +0000 (0:00:00.180) 0:00:29.405 ****** 2026-01-03 00:44:28.777328 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777336 | orchestrator | 2026-01-03 00:44:28.777343 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.777351 | orchestrator | Saturday 03 January 2026 00:44:22 +0000 (0:00:00.181) 0:00:29.587 ****** 2026-01-03 00:44:28.777359 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777367 | orchestrator | 2026-01-03 00:44:28.777375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.777383 | orchestrator | Saturday 03 January 2026 00:44:23 +0000 (0:00:00.205) 
0:00:29.793 ****** 2026-01-03 00:44:28.777391 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777399 | orchestrator | 2026-01-03 00:44:28.777407 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.777415 | orchestrator | Saturday 03 January 2026 00:44:23 +0000 (0:00:00.203) 0:00:29.996 ****** 2026-01-03 00:44:28.777422 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777430 | orchestrator | 2026-01-03 00:44:28.777438 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.777446 | orchestrator | Saturday 03 January 2026 00:44:23 +0000 (0:00:00.181) 0:00:30.177 ****** 2026-01-03 00:44:28.777454 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-03 00:44:28.777462 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-03 00:44:28.777471 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-03 00:44:28.777479 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-03 00:44:28.777493 | orchestrator | 2026-01-03 00:44:28.777501 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.777509 | orchestrator | Saturday 03 January 2026 00:44:24 +0000 (0:00:00.739) 0:00:30.917 ****** 2026-01-03 00:44:28.777517 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777525 | orchestrator | 2026-01-03 00:44:28.777534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.777542 | orchestrator | Saturday 03 January 2026 00:44:24 +0000 (0:00:00.183) 0:00:31.100 ****** 2026-01-03 00:44:28.777550 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777557 | orchestrator | 2026-01-03 00:44:28.777565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.777573 | orchestrator | Saturday 03 
January 2026 00:44:24 +0000 (0:00:00.472) 0:00:31.572 ****** 2026-01-03 00:44:28.777581 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777589 | orchestrator | 2026-01-03 00:44:28.777597 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:28.777605 | orchestrator | Saturday 03 January 2026 00:44:25 +0000 (0:00:00.176) 0:00:31.749 ****** 2026-01-03 00:44:28.777628 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777637 | orchestrator | 2026-01-03 00:44:28.777645 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-03 00:44:28.777653 | orchestrator | Saturday 03 January 2026 00:44:25 +0000 (0:00:00.175) 0:00:31.924 ****** 2026-01-03 00:44:28.777661 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777669 | orchestrator | 2026-01-03 00:44:28.777677 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-03 00:44:28.777685 | orchestrator | Saturday 03 January 2026 00:44:25 +0000 (0:00:00.121) 0:00:32.046 ****** 2026-01-03 00:44:28.777693 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f97db499-9f50-5724-b4de-324784fab4ab'}}) 2026-01-03 00:44:28.777701 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '293f14c0-405b-5b3a-a5c8-f3b182003048'}}) 2026-01-03 00:44:28.777709 | orchestrator | 2026-01-03 00:44:28.777717 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-03 00:44:28.777725 | orchestrator | Saturday 03 January 2026 00:44:25 +0000 (0:00:00.172) 0:00:32.218 ****** 2026-01-03 00:44:28.777733 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'}) 2026-01-03 00:44:28.777742 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'}) 2026-01-03 00:44:28.777750 | orchestrator | 2026-01-03 00:44:28.777758 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-03 00:44:28.777766 | orchestrator | Saturday 03 January 2026 00:44:27 +0000 (0:00:01.773) 0:00:33.992 ****** 2026-01-03 00:44:28.777774 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:28.777783 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:28.777791 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:28.777799 | orchestrator | 2026-01-03 00:44:28.777807 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-03 00:44:28.777815 | orchestrator | Saturday 03 January 2026 00:44:27 +0000 (0:00:00.148) 0:00:34.141 ****** 2026-01-03 00:44:28.777823 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'}) 2026-01-03 00:44:28.777836 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'}) 2026-01-03 00:44:34.160251 | orchestrator | 2026-01-03 00:44:34.160367 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-03 00:44:34.160381 | orchestrator | Saturday 03 January 2026 00:44:28 +0000 (0:00:01.323) 0:00:35.465 ****** 2026-01-03 00:44:34.160406 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 
'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:34.160417 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:34.160426 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.160435 | orchestrator | 2026-01-03 00:44:34.160444 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-03 00:44:34.160452 | orchestrator | Saturday 03 January 2026 00:44:28 +0000 (0:00:00.154) 0:00:35.619 ****** 2026-01-03 00:44:34.160461 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.160469 | orchestrator | 2026-01-03 00:44:34.160478 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-03 00:44:34.160486 | orchestrator | Saturday 03 January 2026 00:44:29 +0000 (0:00:00.116) 0:00:35.736 ****** 2026-01-03 00:44:34.160494 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:34.160502 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:34.160511 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.160519 | orchestrator | 2026-01-03 00:44:34.160527 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-03 00:44:34.160535 | orchestrator | Saturday 03 January 2026 00:44:29 +0000 (0:00:00.147) 0:00:35.884 ****** 2026-01-03 00:44:34.160543 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.160551 | orchestrator | 2026-01-03 00:44:34.160559 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-03 00:44:34.160567 | orchestrator | 
Saturday 03 January 2026 00:44:29 +0000 (0:00:00.108) 0:00:35.993 ****** 2026-01-03 00:44:34.160576 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:34.160584 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:34.160592 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.160600 | orchestrator | 2026-01-03 00:44:34.160608 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-03 00:44:34.160680 | orchestrator | Saturday 03 January 2026 00:44:29 +0000 (0:00:00.253) 0:00:36.246 ****** 2026-01-03 00:44:34.160690 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.160698 | orchestrator | 2026-01-03 00:44:34.160706 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-03 00:44:34.160715 | orchestrator | Saturday 03 January 2026 00:44:29 +0000 (0:00:00.134) 0:00:36.381 ****** 2026-01-03 00:44:34.160723 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:34.160731 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:34.160740 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.160748 | orchestrator | 2026-01-03 00:44:34.160756 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-03 00:44:34.160764 | orchestrator | Saturday 03 January 2026 00:44:29 +0000 (0:00:00.153) 0:00:36.534 ****** 2026-01-03 00:44:34.160772 | orchestrator | ok: [testbed-node-4] 
2026-01-03 00:44:34.160800 | orchestrator | 2026-01-03 00:44:34.160811 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-03 00:44:34.160821 | orchestrator | Saturday 03 January 2026 00:44:29 +0000 (0:00:00.132) 0:00:36.667 ****** 2026-01-03 00:44:34.160830 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:34.160840 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:34.160849 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.160859 | orchestrator | 2026-01-03 00:44:34.160868 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-03 00:44:34.160877 | orchestrator | Saturday 03 January 2026 00:44:30 +0000 (0:00:00.138) 0:00:36.806 ****** 2026-01-03 00:44:34.160886 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:34.160896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:34.160905 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.160915 | orchestrator | 2026-01-03 00:44:34.160924 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-03 00:44:34.160949 | orchestrator | Saturday 03 January 2026 00:44:30 +0000 (0:00:00.141) 0:00:36.947 ****** 2026-01-03 00:44:34.160959 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 
00:44:34.160969 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:34.160978 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.160987 | orchestrator | 2026-01-03 00:44:34.160996 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-03 00:44:34.161006 | orchestrator | Saturday 03 January 2026 00:44:30 +0000 (0:00:00.146) 0:00:37.094 ****** 2026-01-03 00:44:34.161015 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.161025 | orchestrator | 2026-01-03 00:44:34.161034 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-03 00:44:34.161043 | orchestrator | Saturday 03 January 2026 00:44:30 +0000 (0:00:00.136) 0:00:37.231 ****** 2026-01-03 00:44:34.161052 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.161062 | orchestrator | 2026-01-03 00:44:34.161072 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-03 00:44:34.161081 | orchestrator | Saturday 03 January 2026 00:44:30 +0000 (0:00:00.136) 0:00:37.368 ****** 2026-01-03 00:44:34.161091 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.161100 | orchestrator | 2026-01-03 00:44:34.161109 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-03 00:44:34.161118 | orchestrator | Saturday 03 January 2026 00:44:30 +0000 (0:00:00.133) 0:00:37.502 ****** 2026-01-03 00:44:34.161128 | orchestrator | ok: [testbed-node-4] => { 2026-01-03 00:44:34.161138 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-03 00:44:34.161147 | orchestrator | } 2026-01-03 00:44:34.161157 | orchestrator | 2026-01-03 00:44:34.161165 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-03 
00:44:34.161173 | orchestrator | Saturday 03 January 2026 00:44:30 +0000 (0:00:00.146) 0:00:37.649 ****** 2026-01-03 00:44:34.161181 | orchestrator | ok: [testbed-node-4] => { 2026-01-03 00:44:34.161190 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-03 00:44:34.161198 | orchestrator | } 2026-01-03 00:44:34.161206 | orchestrator | 2026-01-03 00:44:34.161214 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-03 00:44:34.161222 | orchestrator | Saturday 03 January 2026 00:44:31 +0000 (0:00:00.169) 0:00:37.818 ****** 2026-01-03 00:44:34.161236 | orchestrator | ok: [testbed-node-4] => { 2026-01-03 00:44:34.161244 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-03 00:44:34.161252 | orchestrator | } 2026-01-03 00:44:34.161260 | orchestrator | 2026-01-03 00:44:34.161268 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-03 00:44:34.161276 | orchestrator | Saturday 03 January 2026 00:44:31 +0000 (0:00:00.340) 0:00:38.159 ****** 2026-01-03 00:44:34.161284 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:44:34.161292 | orchestrator | 2026-01-03 00:44:34.161300 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-03 00:44:34.161312 | orchestrator | Saturday 03 January 2026 00:44:32 +0000 (0:00:00.551) 0:00:38.710 ****** 2026-01-03 00:44:34.161321 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:44:34.161329 | orchestrator | 2026-01-03 00:44:34.161337 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-03 00:44:34.161345 | orchestrator | Saturday 03 January 2026 00:44:32 +0000 (0:00:00.546) 0:00:39.257 ****** 2026-01-03 00:44:34.161353 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:44:34.161361 | orchestrator | 2026-01-03 00:44:34.161369 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-01-03 00:44:34.161377 | orchestrator | Saturday 03 January 2026 00:44:33 +0000 (0:00:00.521) 0:00:39.779 ****** 2026-01-03 00:44:34.161385 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:44:34.161393 | orchestrator | 2026-01-03 00:44:34.161401 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-03 00:44:34.161409 | orchestrator | Saturday 03 January 2026 00:44:33 +0000 (0:00:00.147) 0:00:39.926 ****** 2026-01-03 00:44:34.161417 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.161425 | orchestrator | 2026-01-03 00:44:34.161433 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-03 00:44:34.161441 | orchestrator | Saturday 03 January 2026 00:44:33 +0000 (0:00:00.117) 0:00:40.043 ****** 2026-01-03 00:44:34.161449 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.161457 | orchestrator | 2026-01-03 00:44:34.161465 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-03 00:44:34.161473 | orchestrator | Saturday 03 January 2026 00:44:33 +0000 (0:00:00.120) 0:00:40.164 ****** 2026-01-03 00:44:34.161481 | orchestrator | ok: [testbed-node-4] => { 2026-01-03 00:44:34.161489 | orchestrator |  "vgs_report": { 2026-01-03 00:44:34.161497 | orchestrator |  "vg": [] 2026-01-03 00:44:34.161505 | orchestrator |  } 2026-01-03 00:44:34.161513 | orchestrator | } 2026-01-03 00:44:34.161522 | orchestrator | 2026-01-03 00:44:34.161530 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-03 00:44:34.161538 | orchestrator | Saturday 03 January 2026 00:44:33 +0000 (0:00:00.139) 0:00:40.304 ****** 2026-01-03 00:44:34.161546 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.161554 | orchestrator | 2026-01-03 00:44:34.161562 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-01-03 00:44:34.161570 | orchestrator | Saturday 03 January 2026 00:44:33 +0000 (0:00:00.142) 0:00:40.446 ****** 2026-01-03 00:44:34.161578 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.161586 | orchestrator | 2026-01-03 00:44:34.161594 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-03 00:44:34.161602 | orchestrator | Saturday 03 January 2026 00:44:33 +0000 (0:00:00.136) 0:00:40.582 ****** 2026-01-03 00:44:34.161630 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.161640 | orchestrator | 2026-01-03 00:44:34.161648 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-03 00:44:34.161656 | orchestrator | Saturday 03 January 2026 00:44:34 +0000 (0:00:00.134) 0:00:40.717 ****** 2026-01-03 00:44:34.161664 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:34.161672 | orchestrator | 2026-01-03 00:44:34.161686 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-03 00:44:39.097577 | orchestrator | Saturday 03 January 2026 00:44:34 +0000 (0:00:00.130) 0:00:40.847 ****** 2026-01-03 00:44:39.097702 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.097710 | orchestrator | 2026-01-03 00:44:39.097715 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-03 00:44:39.097720 | orchestrator | Saturday 03 January 2026 00:44:34 +0000 (0:00:00.351) 0:00:41.199 ****** 2026-01-03 00:44:39.097724 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.097728 | orchestrator | 2026-01-03 00:44:39.097732 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-03 00:44:39.097736 | orchestrator | Saturday 03 January 2026 00:44:34 +0000 (0:00:00.149) 0:00:41.348 ****** 2026-01-03 00:44:39.097741 | orchestrator | skipping: [testbed-node-4] 
2026-01-03 00:44:39.097744 | orchestrator | 2026-01-03 00:44:39.097748 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-03 00:44:39.097752 | orchestrator | Saturday 03 January 2026 00:44:34 +0000 (0:00:00.141) 0:00:41.490 ****** 2026-01-03 00:44:39.097756 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.097760 | orchestrator | 2026-01-03 00:44:39.097763 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-03 00:44:39.097767 | orchestrator | Saturday 03 January 2026 00:44:34 +0000 (0:00:00.136) 0:00:41.626 ****** 2026-01-03 00:44:39.097771 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.097775 | orchestrator | 2026-01-03 00:44:39.097779 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-03 00:44:39.097782 | orchestrator | Saturday 03 January 2026 00:44:35 +0000 (0:00:00.135) 0:00:41.762 ****** 2026-01-03 00:44:39.097786 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.097790 | orchestrator | 2026-01-03 00:44:39.097794 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-03 00:44:39.097798 | orchestrator | Saturday 03 January 2026 00:44:35 +0000 (0:00:00.147) 0:00:41.909 ****** 2026-01-03 00:44:39.097801 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.097805 | orchestrator | 2026-01-03 00:44:39.097809 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-03 00:44:39.097813 | orchestrator | Saturday 03 January 2026 00:44:35 +0000 (0:00:00.140) 0:00:42.050 ****** 2026-01-03 00:44:39.097816 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.097820 | orchestrator | 2026-01-03 00:44:39.097824 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-03 00:44:39.097828 | orchestrator | 
Saturday 03 January 2026 00:44:35 +0000 (0:00:00.142) 0:00:42.192 ****** 2026-01-03 00:44:39.097832 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.097835 | orchestrator | 2026-01-03 00:44:39.097839 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-03 00:44:39.097843 | orchestrator | Saturday 03 January 2026 00:44:35 +0000 (0:00:00.139) 0:00:42.331 ****** 2026-01-03 00:44:39.097847 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.097851 | orchestrator | 2026-01-03 00:44:39.097855 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-03 00:44:39.097859 | orchestrator | Saturday 03 January 2026 00:44:35 +0000 (0:00:00.136) 0:00:42.468 ****** 2026-01-03 00:44:39.097864 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:39.097870 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:39.097874 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.097878 | orchestrator | 2026-01-03 00:44:39.097882 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-03 00:44:39.097886 | orchestrator | Saturday 03 January 2026 00:44:35 +0000 (0:00:00.178) 0:00:42.646 ****** 2026-01-03 00:44:39.097890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:39.097897 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:39.097901 | orchestrator | skipping: 
[testbed-node-4] 2026-01-03 00:44:39.097905 | orchestrator | 2026-01-03 00:44:39.097908 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-03 00:44:39.097912 | orchestrator | Saturday 03 January 2026 00:44:36 +0000 (0:00:00.149) 0:00:42.795 ****** 2026-01-03 00:44:39.097916 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:39.097920 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:39.097924 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.097927 | orchestrator | 2026-01-03 00:44:39.097931 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-03 00:44:39.097935 | orchestrator | Saturday 03 January 2026 00:44:36 +0000 (0:00:00.345) 0:00:43.141 ****** 2026-01-03 00:44:39.097939 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:39.097943 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:39.097947 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.097950 | orchestrator | 2026-01-03 00:44:39.097966 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-03 00:44:39.097970 | orchestrator | Saturday 03 January 2026 00:44:36 +0000 (0:00:00.156) 0:00:43.298 ****** 2026-01-03 00:44:39.097974 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 
'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:39.097978 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:39.097982 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.097986 | orchestrator | 2026-01-03 00:44:39.097989 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-03 00:44:39.097993 | orchestrator | Saturday 03 January 2026 00:44:36 +0000 (0:00:00.147) 0:00:43.446 ****** 2026-01-03 00:44:39.097998 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:39.098001 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:39.098005 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.098069 | orchestrator | 2026-01-03 00:44:39.098076 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-03 00:44:39.098082 | orchestrator | Saturday 03 January 2026 00:44:36 +0000 (0:00:00.151) 0:00:43.597 ****** 2026-01-03 00:44:39.098129 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:39.098137 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:39.098144 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.098151 | orchestrator | 2026-01-03 00:44:39.098158 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-03 
00:44:39.098166 | orchestrator | Saturday 03 January 2026 00:44:37 +0000 (0:00:00.170) 0:00:43.768 ****** 2026-01-03 00:44:39.098176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:39.098183 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:39.098188 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.098192 | orchestrator | 2026-01-03 00:44:39.098197 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-03 00:44:39.098201 | orchestrator | Saturday 03 January 2026 00:44:37 +0000 (0:00:00.162) 0:00:43.930 ****** 2026-01-03 00:44:39.098206 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:44:39.098210 | orchestrator | 2026-01-03 00:44:39.098215 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-03 00:44:39.098219 | orchestrator | Saturday 03 January 2026 00:44:37 +0000 (0:00:00.591) 0:00:44.521 ****** 2026-01-03 00:44:39.098224 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:44:39.098228 | orchestrator | 2026-01-03 00:44:39.098233 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-03 00:44:39.098237 | orchestrator | Saturday 03 January 2026 00:44:38 +0000 (0:00:00.609) 0:00:45.131 ****** 2026-01-03 00:44:39.098242 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:44:39.098247 | orchestrator | 2026-01-03 00:44:39.098251 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-03 00:44:39.098255 | orchestrator | Saturday 03 January 2026 00:44:38 +0000 (0:00:00.143) 0:00:45.274 ****** 2026-01-03 00:44:39.098260 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'vg_name': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'}) 2026-01-03 00:44:39.098266 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'vg_name': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'}) 2026-01-03 00:44:39.098271 | orchestrator | 2026-01-03 00:44:39.098275 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-03 00:44:39.098280 | orchestrator | Saturday 03 January 2026 00:44:38 +0000 (0:00:00.170) 0:00:45.445 ****** 2026-01-03 00:44:39.098284 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:39.098289 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:39.098293 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:39.098298 | orchestrator | 2026-01-03 00:44:39.098302 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-03 00:44:39.098307 | orchestrator | Saturday 03 January 2026 00:44:38 +0000 (0:00:00.157) 0:00:45.602 ****** 2026-01-03 00:44:39.098311 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:39.098321 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:44.808416 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:44.808510 | orchestrator | 2026-01-03 00:44:44.808523 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-03 00:44:44.808532 | 
orchestrator | Saturday 03 January 2026 00:44:39 +0000 (0:00:00.182) 0:00:45.784 ****** 2026-01-03 00:44:44.808539 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})  2026-01-03 00:44:44.808548 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})  2026-01-03 00:44:44.808555 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:44:44.808585 | orchestrator | 2026-01-03 00:44:44.808592 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-03 00:44:44.808600 | orchestrator | Saturday 03 January 2026 00:44:39 +0000 (0:00:00.156) 0:00:45.941 ****** 2026-01-03 00:44:44.808640 | orchestrator | ok: [testbed-node-4] => { 2026-01-03 00:44:44.808649 | orchestrator |  "lvm_report": { 2026-01-03 00:44:44.808657 | orchestrator |  "lv": [ 2026-01-03 00:44:44.808664 | orchestrator |  { 2026-01-03 00:44:44.808670 | orchestrator |  "lv_name": "osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048", 2026-01-03 00:44:44.808678 | orchestrator |  "vg_name": "ceph-293f14c0-405b-5b3a-a5c8-f3b182003048" 2026-01-03 00:44:44.808685 | orchestrator |  }, 2026-01-03 00:44:44.808691 | orchestrator |  { 2026-01-03 00:44:44.808698 | orchestrator |  "lv_name": "osd-block-f97db499-9f50-5724-b4de-324784fab4ab", 2026-01-03 00:44:44.808721 | orchestrator |  "vg_name": "ceph-f97db499-9f50-5724-b4de-324784fab4ab" 2026-01-03 00:44:44.808728 | orchestrator |  } 2026-01-03 00:44:44.808735 | orchestrator |  ], 2026-01-03 00:44:44.808741 | orchestrator |  "pv": [ 2026-01-03 00:44:44.808748 | orchestrator |  { 2026-01-03 00:44:44.808755 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-03 00:44:44.808762 | orchestrator |  "vg_name": "ceph-f97db499-9f50-5724-b4de-324784fab4ab" 2026-01-03 00:44:44.808773 | orchestrator |  }, 2026-01-03 
00:44:44.808784 | orchestrator |  { 2026-01-03 00:44:44.808795 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-03 00:44:44.808805 | orchestrator |  "vg_name": "ceph-293f14c0-405b-5b3a-a5c8-f3b182003048" 2026-01-03 00:44:44.808816 | orchestrator |  } 2026-01-03 00:44:44.808825 | orchestrator |  ] 2026-01-03 00:44:44.808835 | orchestrator |  } 2026-01-03 00:44:44.808845 | orchestrator | } 2026-01-03 00:44:44.808856 | orchestrator | 2026-01-03 00:44:44.808866 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-03 00:44:44.808876 | orchestrator | 2026-01-03 00:44:44.808887 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-03 00:44:44.808913 | orchestrator | Saturday 03 January 2026 00:44:39 +0000 (0:00:00.471) 0:00:46.412 ****** 2026-01-03 00:44:44.808926 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-03 00:44:44.808937 | orchestrator | 2026-01-03 00:44:44.808950 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-03 00:44:44.808959 | orchestrator | Saturday 03 January 2026 00:44:39 +0000 (0:00:00.269) 0:00:46.681 ****** 2026-01-03 00:44:44.808966 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:44:44.808974 | orchestrator | 2026-01-03 00:44:44.808982 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.808990 | orchestrator | Saturday 03 January 2026 00:44:40 +0000 (0:00:00.256) 0:00:46.938 ****** 2026-01-03 00:44:44.808998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-03 00:44:44.809006 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-03 00:44:44.809014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-03 00:44:44.809021 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-03 00:44:44.809029 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-03 00:44:44.809037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-03 00:44:44.809044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-03 00:44:44.809052 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-03 00:44:44.809060 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-03 00:44:44.809078 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-03 00:44:44.809086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-03 00:44:44.809093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-03 00:44:44.809102 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-03 00:44:44.809110 | orchestrator | 2026-01-03 00:44:44.809121 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.809128 | orchestrator | Saturday 03 January 2026 00:44:40 +0000 (0:00:00.421) 0:00:47.359 ****** 2026-01-03 00:44:44.809137 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:44.809145 | orchestrator | 2026-01-03 00:44:44.809153 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.809161 | orchestrator | Saturday 03 January 2026 00:44:40 +0000 (0:00:00.219) 0:00:47.579 ****** 2026-01-03 00:44:44.809169 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:44.809176 | orchestrator | 2026-01-03 
00:44:44.809185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.809207 | orchestrator | Saturday 03 January 2026 00:44:41 +0000 (0:00:00.200) 0:00:47.779 ****** 2026-01-03 00:44:44.809215 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:44.809223 | orchestrator | 2026-01-03 00:44:44.809231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.809239 | orchestrator | Saturday 03 January 2026 00:44:41 +0000 (0:00:00.200) 0:00:47.979 ****** 2026-01-03 00:44:44.809247 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:44.809255 | orchestrator | 2026-01-03 00:44:44.809262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.809271 | orchestrator | Saturday 03 January 2026 00:44:41 +0000 (0:00:00.192) 0:00:48.172 ****** 2026-01-03 00:44:44.809279 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:44.809287 | orchestrator | 2026-01-03 00:44:44.809294 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.809302 | orchestrator | Saturday 03 January 2026 00:44:42 +0000 (0:00:00.558) 0:00:48.730 ****** 2026-01-03 00:44:44.809310 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:44.809318 | orchestrator | 2026-01-03 00:44:44.809326 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.809333 | orchestrator | Saturday 03 January 2026 00:44:42 +0000 (0:00:00.180) 0:00:48.911 ****** 2026-01-03 00:44:44.809340 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:44.809347 | orchestrator | 2026-01-03 00:44:44.809354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.809360 | orchestrator | Saturday 03 January 2026 00:44:42 +0000 (0:00:00.191) 
0:00:49.103 ****** 2026-01-03 00:44:44.809367 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:44.809374 | orchestrator | 2026-01-03 00:44:44.809381 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.809387 | orchestrator | Saturday 03 January 2026 00:44:42 +0000 (0:00:00.173) 0:00:49.277 ****** 2026-01-03 00:44:44.809394 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca) 2026-01-03 00:44:44.809403 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca) 2026-01-03 00:44:44.809409 | orchestrator | 2026-01-03 00:44:44.809416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.809423 | orchestrator | Saturday 03 January 2026 00:44:42 +0000 (0:00:00.366) 0:00:49.643 ****** 2026-01-03 00:44:44.809430 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5710b3c7-cfac-4b3c-9149-eeb74f32a79f) 2026-01-03 00:44:44.809437 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5710b3c7-cfac-4b3c-9149-eeb74f32a79f) 2026-01-03 00:44:44.809444 | orchestrator | 2026-01-03 00:44:44.809455 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.809466 | orchestrator | Saturday 03 January 2026 00:44:43 +0000 (0:00:00.403) 0:00:50.047 ****** 2026-01-03 00:44:44.809473 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d4e0be62-f642-4458-ac3b-093009378a3c) 2026-01-03 00:44:44.809480 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d4e0be62-f642-4458-ac3b-093009378a3c) 2026-01-03 00:44:44.809487 | orchestrator | 2026-01-03 00:44:44.809493 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.809500 | orchestrator | Saturday 03 
January 2026 00:44:43 +0000 (0:00:00.375) 0:00:50.422 ****** 2026-01-03 00:44:44.809507 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_55b36885-901f-4155-8165-44f8903e4943) 2026-01-03 00:44:44.809514 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_55b36885-901f-4155-8165-44f8903e4943) 2026-01-03 00:44:44.809521 | orchestrator | 2026-01-03 00:44:44.809527 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-03 00:44:44.809534 | orchestrator | Saturday 03 January 2026 00:44:44 +0000 (0:00:00.391) 0:00:50.814 ****** 2026-01-03 00:44:44.809541 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-03 00:44:44.809548 | orchestrator | 2026-01-03 00:44:44.809555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:44.809562 | orchestrator | Saturday 03 January 2026 00:44:44 +0000 (0:00:00.304) 0:00:51.118 ****** 2026-01-03 00:44:44.809568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-03 00:44:44.809575 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-03 00:44:44.809581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-03 00:44:44.809588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-03 00:44:44.809595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-03 00:44:44.809601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-03 00:44:44.809630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-03 00:44:44.809638 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-03 00:44:44.809644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-03 00:44:44.809651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-03 00:44:44.809658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-03 00:44:44.809669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-03 00:44:53.422178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-03 00:44:53.422284 | orchestrator | 2026-01-03 00:44:53.422304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:53.422329 | orchestrator | Saturday 03 January 2026 00:44:44 +0000 (0:00:00.371) 0:00:51.490 ****** 2026-01-03 00:44:53.422337 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422345 | orchestrator | 2026-01-03 00:44:53.422352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:53.422361 | orchestrator | Saturday 03 January 2026 00:44:44 +0000 (0:00:00.181) 0:00:51.671 ****** 2026-01-03 00:44:53.422366 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422371 | orchestrator | 2026-01-03 00:44:53.422375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:53.422380 | orchestrator | Saturday 03 January 2026 00:44:45 +0000 (0:00:00.504) 0:00:52.176 ****** 2026-01-03 00:44:53.422401 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422406 | orchestrator | 2026-01-03 00:44:53.422410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:53.422414 | 
orchestrator | Saturday 03 January 2026 00:44:45 +0000 (0:00:00.205) 0:00:52.382 ****** 2026-01-03 00:44:53.422419 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422423 | orchestrator | 2026-01-03 00:44:53.422427 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:53.422431 | orchestrator | Saturday 03 January 2026 00:44:45 +0000 (0:00:00.202) 0:00:52.584 ****** 2026-01-03 00:44:53.422436 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422440 | orchestrator | 2026-01-03 00:44:53.422444 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:53.422448 | orchestrator | Saturday 03 January 2026 00:44:46 +0000 (0:00:00.247) 0:00:52.831 ****** 2026-01-03 00:44:53.422452 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422456 | orchestrator | 2026-01-03 00:44:53.422460 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:53.422465 | orchestrator | Saturday 03 January 2026 00:44:46 +0000 (0:00:00.243) 0:00:53.075 ****** 2026-01-03 00:44:53.422469 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422473 | orchestrator | 2026-01-03 00:44:53.422477 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:53.422481 | orchestrator | Saturday 03 January 2026 00:44:46 +0000 (0:00:00.201) 0:00:53.277 ****** 2026-01-03 00:44:53.422485 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422489 | orchestrator | 2026-01-03 00:44:53.422494 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:53.422498 | orchestrator | Saturday 03 January 2026 00:44:46 +0000 (0:00:00.189) 0:00:53.466 ****** 2026-01-03 00:44:53.422502 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-03 00:44:53.422507 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-01-03 00:44:53.422512 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-03 00:44:53.422516 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-03 00:44:53.422520 | orchestrator | 2026-01-03 00:44:53.422525 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:53.422529 | orchestrator | Saturday 03 January 2026 00:44:47 +0000 (0:00:00.654) 0:00:54.121 ****** 2026-01-03 00:44:53.422533 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422551 | orchestrator | 2026-01-03 00:44:53.422556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:53.422560 | orchestrator | Saturday 03 January 2026 00:44:47 +0000 (0:00:00.196) 0:00:54.317 ****** 2026-01-03 00:44:53.422565 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422569 | orchestrator | 2026-01-03 00:44:53.422573 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:53.422577 | orchestrator | Saturday 03 January 2026 00:44:47 +0000 (0:00:00.204) 0:00:54.522 ****** 2026-01-03 00:44:53.422581 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422585 | orchestrator | 2026-01-03 00:44:53.422589 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-03 00:44:53.422593 | orchestrator | Saturday 03 January 2026 00:44:48 +0000 (0:00:00.190) 0:00:54.713 ****** 2026-01-03 00:44:53.422598 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422625 | orchestrator | 2026-01-03 00:44:53.422630 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-03 00:44:53.422634 | orchestrator | Saturday 03 January 2026 00:44:48 +0000 (0:00:00.196) 0:00:54.910 ****** 2026-01-03 00:44:53.422638 | orchestrator | skipping: [testbed-node-5] 2026-01-03 
00:44:53.422643 | orchestrator | 2026-01-03 00:44:53.422647 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-03 00:44:53.422651 | orchestrator | Saturday 03 January 2026 00:44:48 +0000 (0:00:00.307) 0:00:55.217 ****** 2026-01-03 00:44:53.422655 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '124077fc-a709-5275-a3b4-8defea20aa20'}}) 2026-01-03 00:44:53.422664 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43153f84-c643-5017-9328-2bdcf330b780'}}) 2026-01-03 00:44:53.422668 | orchestrator | 2026-01-03 00:44:53.422672 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-03 00:44:53.422676 | orchestrator | Saturday 03 January 2026 00:44:48 +0000 (0:00:00.203) 0:00:55.421 ****** 2026-01-03 00:44:53.422682 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'}) 2026-01-03 00:44:53.422700 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'}) 2026-01-03 00:44:53.422705 | orchestrator | 2026-01-03 00:44:53.422709 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-03 00:44:53.422725 | orchestrator | Saturday 03 January 2026 00:44:50 +0000 (0:00:01.748) 0:00:57.169 ****** 2026-01-03 00:44:53.422729 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:44:53.422735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:44:53.422740 | orchestrator | skipping: 
[testbed-node-5] 2026-01-03 00:44:53.422745 | orchestrator | 2026-01-03 00:44:53.422762 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-03 00:44:53.422768 | orchestrator | Saturday 03 January 2026 00:44:50 +0000 (0:00:00.154) 0:00:57.323 ****** 2026-01-03 00:44:53.422772 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'}) 2026-01-03 00:44:53.422777 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'}) 2026-01-03 00:44:53.422782 | orchestrator | 2026-01-03 00:44:53.422787 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-03 00:44:53.422792 | orchestrator | Saturday 03 January 2026 00:44:51 +0000 (0:00:01.257) 0:00:58.580 ****** 2026-01-03 00:44:53.422796 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:44:53.422801 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:44:53.422806 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422811 | orchestrator | 2026-01-03 00:44:53.422816 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-03 00:44:53.422821 | orchestrator | Saturday 03 January 2026 00:44:52 +0000 (0:00:00.161) 0:00:58.742 ****** 2026-01-03 00:44:53.422825 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422830 | orchestrator | 2026-01-03 00:44:53.422835 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-03 00:44:53.422840 | 
orchestrator | Saturday 03 January 2026 00:44:52 +0000 (0:00:00.143) 0:00:58.886 ****** 2026-01-03 00:44:53.422847 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:44:53.422852 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:44:53.422857 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422862 | orchestrator | 2026-01-03 00:44:53.422867 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-03 00:44:53.422882 | orchestrator | Saturday 03 January 2026 00:44:52 +0000 (0:00:00.164) 0:00:59.051 ****** 2026-01-03 00:44:53.422886 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422891 | orchestrator | 2026-01-03 00:44:53.422895 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-03 00:44:53.422899 | orchestrator | Saturday 03 January 2026 00:44:52 +0000 (0:00:00.149) 0:00:59.200 ****** 2026-01-03 00:44:53.422903 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:44:53.422907 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:44:53.422911 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422915 | orchestrator | 2026-01-03 00:44:53.422920 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-03 00:44:53.422924 | orchestrator | Saturday 03 January 2026 00:44:52 +0000 (0:00:00.136) 0:00:59.337 ****** 2026-01-03 00:44:53.422928 | orchestrator | 
skipping: [testbed-node-5] 2026-01-03 00:44:53.422932 | orchestrator | 2026-01-03 00:44:53.422936 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-03 00:44:53.422940 | orchestrator | Saturday 03 January 2026 00:44:52 +0000 (0:00:00.135) 0:00:59.472 ****** 2026-01-03 00:44:53.422944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:44:53.422948 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:44:53.422953 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:53.422957 | orchestrator | 2026-01-03 00:44:53.422961 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-03 00:44:53.422965 | orchestrator | Saturday 03 January 2026 00:44:52 +0000 (0:00:00.148) 0:00:59.621 ****** 2026-01-03 00:44:53.422969 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:44:53.422973 | orchestrator | 2026-01-03 00:44:53.422978 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-03 00:44:53.422982 | orchestrator | Saturday 03 January 2026 00:44:53 +0000 (0:00:00.337) 0:00:59.959 ****** 2026-01-03 00:44:53.423001 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:44:59.155662 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:44:59.155766 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.155781 | orchestrator | 2026-01-03 00:44:59.155795 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-03 00:44:59.155808 | orchestrator | Saturday 03 January 2026 00:44:53 +0000 (0:00:00.151) 0:01:00.110 ****** 2026-01-03 00:44:59.155820 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:44:59.155834 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:44:59.155852 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.155871 | orchestrator | 2026-01-03 00:44:59.155890 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-03 00:44:59.155907 | orchestrator | Saturday 03 January 2026 00:44:53 +0000 (0:00:00.146) 0:01:00.256 ****** 2026-01-03 00:44:59.155926 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:44:59.155944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:44:59.155992 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.156008 | orchestrator | 2026-01-03 00:44:59.156024 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-03 00:44:59.156040 | orchestrator | Saturday 03 January 2026 00:44:53 +0000 (0:00:00.165) 0:01:00.422 ****** 2026-01-03 00:44:59.156058 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.156075 | orchestrator | 2026-01-03 00:44:59.156093 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-03 00:44:59.156111 | orchestrator | Saturday 03 January 2026 00:44:53 +0000 
(0:00:00.158) 0:01:00.581 ****** 2026-01-03 00:44:59.156129 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.156147 | orchestrator | 2026-01-03 00:44:59.156166 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-03 00:44:59.156184 | orchestrator | Saturday 03 January 2026 00:44:54 +0000 (0:00:00.120) 0:01:00.701 ****** 2026-01-03 00:44:59.156204 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.156223 | orchestrator | 2026-01-03 00:44:59.156260 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-03 00:44:59.156280 | orchestrator | Saturday 03 January 2026 00:44:54 +0000 (0:00:00.144) 0:01:00.845 ****** 2026-01-03 00:44:59.156298 | orchestrator | ok: [testbed-node-5] => { 2026-01-03 00:44:59.156318 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-03 00:44:59.156339 | orchestrator | } 2026-01-03 00:44:59.156353 | orchestrator | 2026-01-03 00:44:59.156366 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-03 00:44:59.156378 | orchestrator | Saturday 03 January 2026 00:44:54 +0000 (0:00:00.138) 0:01:00.984 ****** 2026-01-03 00:44:59.156391 | orchestrator | ok: [testbed-node-5] => { 2026-01-03 00:44:59.156405 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-03 00:44:59.156420 | orchestrator | } 2026-01-03 00:44:59.156433 | orchestrator | 2026-01-03 00:44:59.156445 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-03 00:44:59.156459 | orchestrator | Saturday 03 January 2026 00:44:54 +0000 (0:00:00.146) 0:01:01.130 ****** 2026-01-03 00:44:59.156472 | orchestrator | ok: [testbed-node-5] => { 2026-01-03 00:44:59.156485 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-03 00:44:59.156498 | orchestrator | } 2026-01-03 00:44:59.156511 | orchestrator | 2026-01-03 00:44:59.156524 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-01-03 00:44:59.156537 | orchestrator | Saturday 03 January 2026 00:44:54 +0000 (0:00:00.153) 0:01:01.284 ****** 2026-01-03 00:44:59.156548 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:44:59.156560 | orchestrator | 2026-01-03 00:44:59.156571 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-03 00:44:59.156582 | orchestrator | Saturday 03 January 2026 00:44:55 +0000 (0:00:00.542) 0:01:01.826 ****** 2026-01-03 00:44:59.156593 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:44:59.156650 | orchestrator | 2026-01-03 00:44:59.156661 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-03 00:44:59.156672 | orchestrator | Saturday 03 January 2026 00:44:55 +0000 (0:00:00.531) 0:01:02.357 ****** 2026-01-03 00:44:59.156684 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:44:59.156695 | orchestrator | 2026-01-03 00:44:59.156706 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-03 00:44:59.156717 | orchestrator | Saturday 03 January 2026 00:44:56 +0000 (0:00:00.658) 0:01:03.015 ****** 2026-01-03 00:44:59.156728 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:44:59.156739 | orchestrator | 2026-01-03 00:44:59.156750 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-03 00:44:59.156761 | orchestrator | Saturday 03 January 2026 00:44:56 +0000 (0:00:00.141) 0:01:03.157 ****** 2026-01-03 00:44:59.156772 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.156783 | orchestrator | 2026-01-03 00:44:59.156795 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-03 00:44:59.156821 | orchestrator | Saturday 03 January 2026 00:44:56 +0000 (0:00:00.111) 0:01:03.268 ****** 2026-01-03 00:44:59.156832 | orchestrator | 
skipping: [testbed-node-5] 2026-01-03 00:44:59.156843 | orchestrator | 2026-01-03 00:44:59.156854 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-03 00:44:59.156865 | orchestrator | Saturday 03 January 2026 00:44:56 +0000 (0:00:00.101) 0:01:03.369 ****** 2026-01-03 00:44:59.156877 | orchestrator | ok: [testbed-node-5] => { 2026-01-03 00:44:59.156888 | orchestrator |  "vgs_report": { 2026-01-03 00:44:59.156900 | orchestrator |  "vg": [] 2026-01-03 00:44:59.156932 | orchestrator |  } 2026-01-03 00:44:59.156944 | orchestrator | } 2026-01-03 00:44:59.156955 | orchestrator | 2026-01-03 00:44:59.156967 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-03 00:44:59.156978 | orchestrator | Saturday 03 January 2026 00:44:56 +0000 (0:00:00.147) 0:01:03.517 ****** 2026-01-03 00:44:59.156989 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157000 | orchestrator | 2026-01-03 00:44:59.157011 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-03 00:44:59.157023 | orchestrator | Saturday 03 January 2026 00:44:56 +0000 (0:00:00.130) 0:01:03.647 ****** 2026-01-03 00:44:59.157034 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157045 | orchestrator | 2026-01-03 00:44:59.157056 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-03 00:44:59.157067 | orchestrator | Saturday 03 January 2026 00:44:57 +0000 (0:00:00.143) 0:01:03.790 ****** 2026-01-03 00:44:59.157079 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157090 | orchestrator | 2026-01-03 00:44:59.157101 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-03 00:44:59.157112 | orchestrator | Saturday 03 January 2026 00:44:57 +0000 (0:00:00.153) 0:01:03.943 ****** 2026-01-03 00:44:59.157123 | orchestrator | 
skipping: [testbed-node-5] 2026-01-03 00:44:59.157135 | orchestrator | 2026-01-03 00:44:59.157146 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-03 00:44:59.157157 | orchestrator | Saturday 03 January 2026 00:44:57 +0000 (0:00:00.132) 0:01:04.076 ****** 2026-01-03 00:44:59.157168 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157179 | orchestrator | 2026-01-03 00:44:59.157190 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-03 00:44:59.157201 | orchestrator | Saturday 03 January 2026 00:44:57 +0000 (0:00:00.111) 0:01:04.187 ****** 2026-01-03 00:44:59.157213 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157224 | orchestrator | 2026-01-03 00:44:59.157235 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-03 00:44:59.157246 | orchestrator | Saturday 03 January 2026 00:44:57 +0000 (0:00:00.114) 0:01:04.302 ****** 2026-01-03 00:44:59.157257 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157268 | orchestrator | 2026-01-03 00:44:59.157279 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-03 00:44:59.157290 | orchestrator | Saturday 03 January 2026 00:44:57 +0000 (0:00:00.112) 0:01:04.415 ****** 2026-01-03 00:44:59.157302 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157313 | orchestrator | 2026-01-03 00:44:59.157324 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-03 00:44:59.157335 | orchestrator | Saturday 03 January 2026 00:44:57 +0000 (0:00:00.237) 0:01:04.652 ****** 2026-01-03 00:44:59.157346 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157357 | orchestrator | 2026-01-03 00:44:59.157375 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-03 
00:44:59.157387 | orchestrator | Saturday 03 January 2026 00:44:58 +0000 (0:00:00.118) 0:01:04.771 ****** 2026-01-03 00:44:59.157398 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157409 | orchestrator | 2026-01-03 00:44:59.157420 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-03 00:44:59.157439 | orchestrator | Saturday 03 January 2026 00:44:58 +0000 (0:00:00.126) 0:01:04.897 ****** 2026-01-03 00:44:59.157451 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157461 | orchestrator | 2026-01-03 00:44:59.157473 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-03 00:44:59.157484 | orchestrator | Saturday 03 January 2026 00:44:58 +0000 (0:00:00.115) 0:01:05.013 ****** 2026-01-03 00:44:59.157495 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157506 | orchestrator | 2026-01-03 00:44:59.157517 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-03 00:44:59.157528 | orchestrator | Saturday 03 January 2026 00:44:58 +0000 (0:00:00.127) 0:01:05.140 ****** 2026-01-03 00:44:59.157540 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157551 | orchestrator | 2026-01-03 00:44:59.157562 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-03 00:44:59.157573 | orchestrator | Saturday 03 January 2026 00:44:58 +0000 (0:00:00.140) 0:01:05.281 ****** 2026-01-03 00:44:59.157584 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157595 | orchestrator | 2026-01-03 00:44:59.157639 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-03 00:44:59.157651 | orchestrator | Saturday 03 January 2026 00:44:58 +0000 (0:00:00.124) 0:01:05.405 ****** 2026-01-03 00:44:59.157662 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:44:59.157674 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:44:59.157685 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157696 | orchestrator | 2026-01-03 00:44:59.157707 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-03 00:44:59.157718 | orchestrator | Saturday 03 January 2026 00:44:58 +0000 (0:00:00.146) 0:01:05.552 ****** 2026-01-03 00:44:59.157729 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:44:59.157741 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:44:59.157752 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:44:59.157763 | orchestrator | 2026-01-03 00:44:59.157774 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-03 00:44:59.157786 | orchestrator | Saturday 03 January 2026 00:44:58 +0000 (0:00:00.132) 0:01:05.685 ****** 2026-01-03 00:44:59.157805 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:45:02.186591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:45:02.186722 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:45:02.186733 | orchestrator | 2026-01-03 00:45:02.186741 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-03 00:45:02.186750 | orchestrator | Saturday 03 January 2026 00:44:59 +0000 (0:00:00.159) 0:01:05.844 ****** 2026-01-03 00:45:02.186756 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:45:02.186763 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:45:02.186769 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:45:02.186776 | orchestrator | 2026-01-03 00:45:02.186782 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-03 00:45:02.186808 | orchestrator | Saturday 03 January 2026 00:44:59 +0000 (0:00:00.138) 0:01:05.982 ****** 2026-01-03 00:45:02.186820 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:45:02.186833 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:45:02.186847 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:45:02.186857 | orchestrator | 2026-01-03 00:45:02.186867 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-03 00:45:02.186876 | orchestrator | Saturday 03 January 2026 00:44:59 +0000 (0:00:00.138) 0:01:06.121 ****** 2026-01-03 00:45:02.186886 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:45:02.186896 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:45:02.186906 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:45:02.186915 | orchestrator | 2026-01-03 00:45:02.186925 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-03 00:45:02.186935 | orchestrator | Saturday 03 January 2026 00:44:59 +0000 (0:00:00.348) 0:01:06.469 ****** 2026-01-03 00:45:02.186945 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:45:02.186956 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:45:02.186968 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:45:02.186978 | orchestrator | 2026-01-03 00:45:02.186988 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-03 00:45:02.186998 | orchestrator | Saturday 03 January 2026 00:44:59 +0000 (0:00:00.162) 0:01:06.631 ****** 2026-01-03 00:45:02.187008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:45:02.187018 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:45:02.187028 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:45:02.187038 | orchestrator | 2026-01-03 00:45:02.187049 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-03 00:45:02.187059 | orchestrator | Saturday 03 January 2026 00:45:00 +0000 (0:00:00.153) 0:01:06.785 ****** 2026-01-03 00:45:02.187069 | 
orchestrator | ok: [testbed-node-5] 2026-01-03 00:45:02.187082 | orchestrator | 2026-01-03 00:45:02.187091 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-03 00:45:02.187101 | orchestrator | Saturday 03 January 2026 00:45:00 +0000 (0:00:00.551) 0:01:07.337 ****** 2026-01-03 00:45:02.187111 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:45:02.187121 | orchestrator | 2026-01-03 00:45:02.187132 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-03 00:45:02.187142 | orchestrator | Saturday 03 January 2026 00:45:01 +0000 (0:00:00.534) 0:01:07.872 ****** 2026-01-03 00:45:02.187153 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:45:02.187164 | orchestrator | 2026-01-03 00:45:02.187175 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-03 00:45:02.187188 | orchestrator | Saturday 03 January 2026 00:45:01 +0000 (0:00:00.147) 0:01:08.020 ****** 2026-01-03 00:45:02.187199 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'vg_name': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'}) 2026-01-03 00:45:02.187213 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'vg_name': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'}) 2026-01-03 00:45:02.187235 | orchestrator | 2026-01-03 00:45:02.187247 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-03 00:45:02.187257 | orchestrator | Saturday 03 January 2026 00:45:01 +0000 (0:00:00.211) 0:01:08.231 ****** 2026-01-03 00:45:02.187304 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:45:02.187317 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:45:02.187328 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:45:02.187339 | orchestrator | 2026-01-03 00:45:02.187350 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-03 00:45:02.187361 | orchestrator | Saturday 03 January 2026 00:45:01 +0000 (0:00:00.155) 0:01:08.387 ****** 2026-01-03 00:45:02.187371 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:45:02.187383 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:45:02.187393 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:45:02.187404 | orchestrator | 2026-01-03 00:45:02.187415 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-03 00:45:02.187425 | orchestrator | Saturday 03 January 2026 00:45:01 +0000 (0:00:00.158) 0:01:08.546 ****** 2026-01-03 00:45:02.187436 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})  2026-01-03 00:45:02.187448 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})  2026-01-03 00:45:02.187458 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:45:02.187468 | orchestrator | 2026-01-03 00:45:02.187478 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-03 00:45:02.187491 | orchestrator | Saturday 03 January 2026 00:45:02 +0000 (0:00:00.157) 0:01:08.703 ****** 2026-01-03 00:45:02.187501 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-03 00:45:02.187511 | orchestrator |  "lvm_report": { 2026-01-03 00:45:02.187521 | orchestrator |  "lv": [ 2026-01-03 00:45:02.187532 | orchestrator |  { 2026-01-03 00:45:02.187549 | orchestrator |  "lv_name": "osd-block-124077fc-a709-5275-a3b4-8defea20aa20", 2026-01-03 00:45:02.187561 | orchestrator |  "vg_name": "ceph-124077fc-a709-5275-a3b4-8defea20aa20" 2026-01-03 00:45:02.187572 | orchestrator |  }, 2026-01-03 00:45:02.187583 | orchestrator |  { 2026-01-03 00:45:02.187594 | orchestrator |  "lv_name": "osd-block-43153f84-c643-5017-9328-2bdcf330b780", 2026-01-03 00:45:02.187660 | orchestrator |  "vg_name": "ceph-43153f84-c643-5017-9328-2bdcf330b780" 2026-01-03 00:45:02.187672 | orchestrator |  } 2026-01-03 00:45:02.187683 | orchestrator |  ], 2026-01-03 00:45:02.187694 | orchestrator |  "pv": [ 2026-01-03 00:45:02.187706 | orchestrator |  { 2026-01-03 00:45:02.187716 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-03 00:45:02.187727 | orchestrator |  "vg_name": "ceph-124077fc-a709-5275-a3b4-8defea20aa20" 2026-01-03 00:45:02.187737 | orchestrator |  }, 2026-01-03 00:45:02.187747 | orchestrator |  { 2026-01-03 00:45:02.187757 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-03 00:45:02.187768 | orchestrator |  "vg_name": "ceph-43153f84-c643-5017-9328-2bdcf330b780" 2026-01-03 00:45:02.187779 | orchestrator |  } 2026-01-03 00:45:02.187788 | orchestrator |  ] 2026-01-03 00:45:02.187840 | orchestrator |  } 2026-01-03 00:45:02.187852 | orchestrator | } 2026-01-03 00:45:02.187863 | orchestrator | 2026-01-03 00:45:02.187874 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:45:02.187885 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-03 00:45:02.187897 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-03 00:45:02.187908 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-03 00:45:02.187919 | orchestrator | 2026-01-03 00:45:02.187931 | orchestrator | 2026-01-03 00:45:02.187941 | orchestrator | 2026-01-03 00:45:02.187953 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:45:02.187964 | orchestrator | Saturday 03 January 2026 00:45:02 +0000 (0:00:00.145) 0:01:08.849 ****** 2026-01-03 00:45:02.187975 | orchestrator | =============================================================================== 2026-01-03 00:45:02.187987 | orchestrator | Create block VGs -------------------------------------------------------- 5.45s 2026-01-03 00:45:02.187998 | orchestrator | Create block LVs -------------------------------------------------------- 4.10s 2026-01-03 00:45:02.188008 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.75s 2026-01-03 00:45:02.188020 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.72s 2026-01-03 00:45:02.188030 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.69s 2026-01-03 00:45:02.188042 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.68s 2026-01-03 00:45:02.188052 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.64s 2026-01-03 00:45:02.188063 | orchestrator | Add known partitions to the list of available block devices ------------- 1.44s 2026-01-03 00:45:02.188085 | orchestrator | Add known links to the list of available block devices ------------------ 1.35s 2026-01-03 00:45:02.566170 | orchestrator | Print LVM report data --------------------------------------------------- 0.87s 2026-01-03 00:45:02.566269 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s 2026-01-03 00:45:02.566283 | 
orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.79s 2026-01-03 00:45:02.566295 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-01-03 00:45:02.566306 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s 2026-01-03 00:45:02.566318 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.67s 2026-01-03 00:45:02.566329 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.67s 2026-01-03 00:45:02.566340 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2026-01-03 00:45:02.566351 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.66s 2026-01-03 00:45:02.566362 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2026-01-03 00:45:02.566373 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.65s 2026-01-03 00:45:15.041245 | orchestrator | 2026-01-03 00:45:15 | INFO  | Task 677636df-aa51-4564-acc8-f4d75ef8fd49 (facts) was prepared for execution. 2026-01-03 00:45:15.041384 | orchestrator | 2026-01-03 00:45:15 | INFO  | It takes a moment until task 677636df-aa51-4564-acc8-f4d75ef8fd49 (facts) has been started and output is visible here. 
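The LVM play above gathers `lvs`/`pvs` output as JSON (tasks "Get list of Ceph LVs with associated VGs", "Get list of Ceph PVs with associated VGs") and then merges the two in "Combine JSON from _lvs_cmd_output/_pvs_cmd_output", producing the `lvm_report` structure printed at the end. A minimal sketch of that combine step, using the exact LV/PV data shown in the log — the `combine` helper and the JSON field layout (`report[0]["lv"]`, matching `lvs --reportformat json`) are assumptions for illustration, not the playbook's actual code:

```python
import json

# Sample command output copied from the log's "Print LVM report data" task.
# The {"report": [...]} envelope mirrors lvs/pvs --reportformat json (assumed).
lvs_out = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-124077fc-a709-5275-a3b4-8defea20aa20",
     "vg_name": "ceph-124077fc-a709-5275-a3b4-8defea20aa20"},
    {"lv_name": "osd-block-43153f84-c643-5017-9328-2bdcf330b780",
     "vg_name": "ceph-43153f84-c643-5017-9328-2bdcf330b780"},
]}]})
pvs_out = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb", "vg_name": "ceph-124077fc-a709-5275-a3b4-8defea20aa20"},
    {"pv_name": "/dev/sdc", "vg_name": "ceph-43153f84-c643-5017-9328-2bdcf330b780"},
]}]})

def combine(lvs_json: str, pvs_json: str) -> dict:
    """Merge raw lvs/pvs JSON into the lvm_report dict shown in the log."""
    return {
        "lv": json.loads(lvs_json)["report"][0]["lv"],
        "pv": json.loads(pvs_json)["report"][0]["pv"],
    }

lvm_report = combine(lvs_out, pvs_out)

# "Create list of VG/LV names" then pairs each LV with its VG:
vg_lv = [f"{lv['vg_name']}/{lv['lv_name']}" for lv in lvm_report["lv"]]
print(vg_lv)
```

The subsequent "Fail if ... LV defined in lvm_volumes is missing" tasks can then simply test membership in such a `vg_lv` list; in this run they are all skipped because every configured LV is present.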
2026-01-03 00:45:26.494274 | orchestrator | 2026-01-03 00:45:26.494365 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-03 00:45:26.494378 | orchestrator | 2026-01-03 00:45:26.494387 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-03 00:45:26.494396 | orchestrator | Saturday 03 January 2026 00:45:19 +0000 (0:00:00.253) 0:00:00.253 ****** 2026-01-03 00:45:26.494430 | orchestrator | ok: [testbed-manager] 2026-01-03 00:45:26.494439 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:45:26.494446 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:45:26.494453 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:45:26.494461 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:45:26.494468 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:45:26.494476 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:45:26.494483 | orchestrator | 2026-01-03 00:45:26.494503 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-03 00:45:26.494512 | orchestrator | Saturday 03 January 2026 00:45:20 +0000 (0:00:00.934) 0:00:01.187 ****** 2026-01-03 00:45:26.494519 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:45:26.494527 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:45:26.494534 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:45:26.494541 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:45:26.494549 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:45:26.494556 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:45:26.494564 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:45:26.494571 | orchestrator | 2026-01-03 00:45:26.494578 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-03 00:45:26.494585 | orchestrator | 2026-01-03 00:45:26.494651 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-03 00:45:26.494661 | orchestrator | Saturday 03 January 2026 00:45:21 +0000 (0:00:00.983) 0:00:02.171 ****** 2026-01-03 00:45:26.494665 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:45:26.494670 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:45:26.494674 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:45:26.494679 | orchestrator | ok: [testbed-manager] 2026-01-03 00:45:26.494683 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:45:26.494687 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:45:26.494692 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:45:26.494699 | orchestrator | 2026-01-03 00:45:26.494706 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-03 00:45:26.494714 | orchestrator | 2026-01-03 00:45:26.494721 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-03 00:45:26.494728 | orchestrator | Saturday 03 January 2026 00:45:25 +0000 (0:00:04.639) 0:00:06.810 ****** 2026-01-03 00:45:26.494735 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:45:26.494742 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:45:26.494750 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:45:26.494757 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:45:26.494764 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:45:26.494771 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:45:26.494778 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:45:26.494786 | orchestrator | 2026-01-03 00:45:26.494793 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:45:26.494800 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:45:26.494809 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-03 00:45:26.494816 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:45:26.494824 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:45:26.494831 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:45:26.494839 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:45:26.494856 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:45:26.494864 | orchestrator | 2026-01-03 00:45:26.494871 | orchestrator | 2026-01-03 00:45:26.494878 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:45:26.494885 | orchestrator | Saturday 03 January 2026 00:45:26 +0000 (0:00:00.448) 0:00:07.259 ****** 2026-01-03 00:45:26.494893 | orchestrator | =============================================================================== 2026-01-03 00:45:26.494901 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.64s 2026-01-03 00:45:26.494909 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 0.98s 2026-01-03 00:45:26.494916 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.93s 2026-01-03 00:45:26.494924 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-01-03 00:45:38.729218 | orchestrator | 2026-01-03 00:45:38 | INFO  | Task f799bb7b-327f-4db9-8e2d-806fcfc5b879 (frr) was prepared for execution. 2026-01-03 00:45:38.729286 | orchestrator | 2026-01-03 00:45:38 | INFO  | It takes a moment until task f799bb7b-327f-4db9-8e2d-806fcfc5b879 (frr) has been started and output is visible here. 
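Between plays, the osism CLI repeatedly reports "Task ... is in state STARTED" and "Wait 1 second(s) until the next check" until each background task finishes. A minimal sketch of such a poll loop, assuming a `get_state` callable standing in for the real task backend (the function name, return values, and `interval` parameter are illustrative, not the osism implementation):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1):
    """Poll task states until none remain pending; return number of poll rounds."""
    pending = set(task_ids)
    rounds = 0
    while pending:
        rounds += 1
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return rounds

# Simulated backend: the task is STARTED on the first check, SUCCESS on the second.
states = iter(["STARTED", "SUCCESS"])
rounds = wait_for_tasks(["f799bb7b"], lambda _tid: next(states), interval=0)
```

In the log below, seven deploy tasks are polled this way in parallel until the whole nutshell collection completes.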
2026-01-03 00:46:03.665746 | orchestrator | 2026-01-03 00:46:03.665846 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-03 00:46:03.665860 | orchestrator | 2026-01-03 00:46:03.665870 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-03 00:46:03.665878 | orchestrator | Saturday 03 January 2026 00:45:43 +0000 (0:00:00.226) 0:00:00.226 ****** 2026-01-03 00:46:03.665887 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-03 00:46:03.665897 | orchestrator | 2026-01-03 00:46:03.665905 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-03 00:46:03.665913 | orchestrator | Saturday 03 January 2026 00:45:43 +0000 (0:00:00.229) 0:00:00.456 ****** 2026-01-03 00:46:03.665922 | orchestrator | changed: [testbed-manager] 2026-01-03 00:46:03.665930 | orchestrator | 2026-01-03 00:46:03.666128 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-03 00:46:03.666175 | orchestrator | Saturday 03 January 2026 00:45:44 +0000 (0:00:01.157) 0:00:01.613 ****** 2026-01-03 00:46:03.666192 | orchestrator | changed: [testbed-manager] 2026-01-03 00:46:03.666205 | orchestrator | 2026-01-03 00:46:03.666219 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-03 00:46:03.666232 | orchestrator | Saturday 03 January 2026 00:45:53 +0000 (0:00:09.531) 0:00:11.145 ****** 2026-01-03 00:46:03.666246 | orchestrator | ok: [testbed-manager] 2026-01-03 00:46:03.666260 | orchestrator | 2026-01-03 00:46:03.666274 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-03 00:46:03.666289 | orchestrator | Saturday 03 January 2026 00:45:54 +0000 (0:00:00.987) 0:00:12.133 ****** 2026-01-03 
00:46:03.666304 | orchestrator | changed: [testbed-manager] 2026-01-03 00:46:03.666318 | orchestrator | 2026-01-03 00:46:03.666333 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-03 00:46:03.666344 | orchestrator | Saturday 03 January 2026 00:45:55 +0000 (0:00:00.901) 0:00:13.034 ****** 2026-01-03 00:46:03.666357 | orchestrator | ok: [testbed-manager] 2026-01-03 00:46:03.666376 | orchestrator | 2026-01-03 00:46:03.666395 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-03 00:46:03.666410 | orchestrator | Saturday 03 January 2026 00:45:56 +0000 (0:00:01.147) 0:00:14.182 ****** 2026-01-03 00:46:03.666423 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:46:03.666436 | orchestrator | 2026-01-03 00:46:03.666449 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-03 00:46:03.666464 | orchestrator | Saturday 03 January 2026 00:45:57 +0000 (0:00:00.127) 0:00:14.310 ****** 2026-01-03 00:46:03.666506 | orchestrator | skipping: [testbed-manager] 2026-01-03 00:46:03.666521 | orchestrator | 2026-01-03 00:46:03.666535 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-03 00:46:03.666548 | orchestrator | Saturday 03 January 2026 00:45:57 +0000 (0:00:00.143) 0:00:14.454 ****** 2026-01-03 00:46:03.666562 | orchestrator | changed: [testbed-manager] 2026-01-03 00:46:03.666604 | orchestrator | 2026-01-03 00:46:03.666618 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-03 00:46:03.666632 | orchestrator | Saturday 03 January 2026 00:45:58 +0000 (0:00:00.970) 0:00:15.424 ****** 2026-01-03 00:46:03.666644 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-03 00:46:03.666655 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-03 00:46:03.666669 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-03 00:46:03.666682 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-03 00:46:03.666695 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-03 00:46:03.666707 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-03 00:46:03.666719 | orchestrator | 2026-01-03 00:46:03.666731 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-03 00:46:03.666744 | orchestrator | Saturday 03 January 2026 00:46:00 +0000 (0:00:02.233) 0:00:17.658 ****** 2026-01-03 00:46:03.666756 | orchestrator | ok: [testbed-manager] 2026-01-03 00:46:03.666769 | orchestrator | 2026-01-03 00:46:03.666783 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-03 00:46:03.666797 | orchestrator | Saturday 03 January 2026 00:46:02 +0000 (0:00:01.561) 0:00:19.220 ****** 2026-01-03 00:46:03.666810 | orchestrator | changed: [testbed-manager] 2026-01-03 00:46:03.666824 | orchestrator | 2026-01-03 00:46:03.666833 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:46:03.666842 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-03 00:46:03.666851 | orchestrator | 2026-01-03 00:46:03.666859 | orchestrator | 2026-01-03 00:46:03.666867 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:46:03.666875 | orchestrator | Saturday 03 January 2026 00:46:03 +0000 (0:00:01.382) 0:00:20.603 ****** 2026-01-03 00:46:03.666882 | 
orchestrator | =============================================================================== 2026-01-03 00:46:03.666908 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.53s 2026-01-03 00:46:03.666917 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.23s 2026-01-03 00:46:03.666986 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.56s 2026-01-03 00:46:03.666995 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.38s 2026-01-03 00:46:03.667004 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.16s 2026-01-03 00:46:03.667033 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.15s 2026-01-03 00:46:03.667042 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.99s 2026-01-03 00:46:03.667050 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.97s 2026-01-03 00:46:03.667058 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.90s 2026-01-03 00:46:03.667066 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2026-01-03 00:46:03.667074 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s 2026-01-03 00:46:03.667083 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-01-03 00:46:03.953878 | orchestrator | 2026-01-03 00:46:03.957093 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Jan 3 00:46:03 UTC 2026 2026-01-03 00:46:03.957165 | orchestrator | 2026-01-03 00:46:05.886178 | orchestrator | 2026-01-03 00:46:05 | INFO  | Collection nutshell is prepared for execution 2026-01-03 00:46:05.886298 | orchestrator | 2026-01-03 00:46:05 | INFO  | A [0] - 
dotfiles 2026-01-03 00:46:16.051067 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [0] - homer 2026-01-03 00:46:16.051117 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [0] - netdata 2026-01-03 00:46:16.051123 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [0] - openstackclient 2026-01-03 00:46:16.051381 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [0] - phpmyadmin 2026-01-03 00:46:16.051756 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [0] - common 2026-01-03 00:46:16.055721 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [1] -- loadbalancer 2026-01-03 00:46:16.055763 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [2] --- opensearch 2026-01-03 00:46:16.056167 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [2] --- mariadb-ng 2026-01-03 00:46:16.056205 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [3] ---- horizon 2026-01-03 00:46:16.056318 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [3] ---- keystone 2026-01-03 00:46:16.056723 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [4] ----- neutron 2026-01-03 00:46:16.056937 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [5] ------ wait-for-nova 2026-01-03 00:46:16.057182 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [6] ------- octavia 2026-01-03 00:46:16.059159 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [4] ----- barbican 2026-01-03 00:46:16.059210 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [4] ----- designate 2026-01-03 00:46:16.059220 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [4] ----- ironic 2026-01-03 00:46:16.059229 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [4] ----- placement 2026-01-03 00:46:16.059598 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [4] ----- magnum 2026-01-03 00:46:16.060055 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [1] -- openvswitch 2026-01-03 00:46:16.060317 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [2] --- ovn 2026-01-03 00:46:16.060846 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [1] -- memcached 2026-01-03 
00:46:16.060872 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [1] -- redis 2026-01-03 00:46:16.060880 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [1] -- rabbitmq-ng 2026-01-03 00:46:16.061465 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [0] - kubernetes 2026-01-03 00:46:16.063728 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [1] -- kubeconfig 2026-01-03 00:46:16.063765 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [1] -- copy-kubeconfig 2026-01-03 00:46:16.064147 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [0] - ceph 2026-01-03 00:46:16.066859 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [1] -- ceph-pools 2026-01-03 00:46:16.066892 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [2] --- copy-ceph-keys 2026-01-03 00:46:16.066898 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [3] ---- cephclient 2026-01-03 00:46:16.066903 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-03 00:46:16.067212 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [4] ----- wait-for-keystone 2026-01-03 00:46:16.067241 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-03 00:46:16.067246 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [5] ------ glance 2026-01-03 00:46:16.067264 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [5] ------ cinder 2026-01-03 00:46:16.067269 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [5] ------ nova 2026-01-03 00:46:16.067678 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [4] ----- prometheus 2026-01-03 00:46:16.067711 | orchestrator | 2026-01-03 00:46:16 | INFO  | A [5] ------ grafana 2026-01-03 00:46:16.261746 | orchestrator | 2026-01-03 00:46:16 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-03 00:46:16.261843 | orchestrator | 2026-01-03 00:46:16 | INFO  | Tasks are running in the background 2026-01-03 00:46:18.925388 | orchestrator | 2026-01-03 00:46:18 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-03 00:46:21.018371 | orchestrator | 2026-01-03 00:46:21 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:46:21.018454 | orchestrator | 2026-01-03 00:46:21 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:46:21.018924 | orchestrator | 2026-01-03 00:46:21 | INFO  | Task 8cb36722-bdf1-4a63-a30c-4dfd2f60232a is in state STARTED 2026-01-03 00:46:21.019386 | orchestrator | 2026-01-03 00:46:21 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:46:21.019970 | orchestrator | 2026-01-03 00:46:21 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:46:21.020526 | orchestrator | 2026-01-03 00:46:21 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:46:21.021078 | orchestrator | 2026-01-03 00:46:21 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:46:21.021112 | orchestrator | 2026-01-03 00:46:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:46:24.059959 | orchestrator | 2026-01-03 00:46:24 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:46:24.061473 | orchestrator | 2026-01-03 00:46:24 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:46:24.062163 | orchestrator | 2026-01-03 00:46:24 | INFO  | Task 8cb36722-bdf1-4a63-a30c-4dfd2f60232a is in state STARTED 2026-01-03 00:46:24.064047 | orchestrator | 2026-01-03 00:46:24 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:46:24.066650 | orchestrator | 2026-01-03 00:46:24 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:46:24.066807 | orchestrator | 2026-01-03 00:46:24 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:46:24.067330 | orchestrator | 2026-01-03 00:46:24 | INFO  | Task 
09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:46:24.067376 | orchestrator | 2026-01-03 00:46:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:46:27.113158 | orchestrator | 2026-01-03 00:46:27 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:46:27.116305 | orchestrator | 2026-01-03 00:46:27 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:46:27.116975 | orchestrator | 2026-01-03 00:46:27 | INFO  | Task 8cb36722-bdf1-4a63-a30c-4dfd2f60232a is in state STARTED 2026-01-03 00:46:27.120721 | orchestrator | 2026-01-03 00:46:27 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:46:27.120984 | orchestrator | 2026-01-03 00:46:27 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:46:27.121618 | orchestrator | 2026-01-03 00:46:27 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:46:27.122678 | orchestrator | 2026-01-03 00:46:27 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:46:27.124667 | orchestrator | 2026-01-03 00:46:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:46:30.239774 | orchestrator | 2026-01-03 00:46:30 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:46:30.240374 | orchestrator | 2026-01-03 00:46:30 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:46:30.241071 | orchestrator | 2026-01-03 00:46:30 | INFO  | Task 8cb36722-bdf1-4a63-a30c-4dfd2f60232a is in state STARTED 2026-01-03 00:46:30.241899 | orchestrator | 2026-01-03 00:46:30 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:46:30.242508 | orchestrator | 2026-01-03 00:46:30 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:46:30.243145 | orchestrator | 2026-01-03 00:46:30 | INFO  | Task 
2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:46:30.244029 | orchestrator | 2026-01-03 00:46:30 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:46:30.245415 | orchestrator | 2026-01-03 00:46:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:46:33.279302 | orchestrator | 2026-01-03 00:46:33 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:46:33.279471 | orchestrator | 2026-01-03 00:46:33 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:46:33.280284 | orchestrator | 2026-01-03 00:46:33 | INFO  | Task 8cb36722-bdf1-4a63-a30c-4dfd2f60232a is in state STARTED 2026-01-03 00:46:33.280880 | orchestrator | 2026-01-03 00:46:33 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:46:33.281717 | orchestrator | 2026-01-03 00:46:33 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:46:33.282725 | orchestrator | 2026-01-03 00:46:33 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:46:33.283182 | orchestrator | 2026-01-03 00:46:33 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:46:33.283462 | orchestrator | 2026-01-03 00:46:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:46:36.465975 | orchestrator | 2026-01-03 00:46:36 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:46:36.466072 | orchestrator | 2026-01-03 00:46:36 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:46:36.466080 | orchestrator | 2026-01-03 00:46:36 | INFO  | Task 8cb36722-bdf1-4a63-a30c-4dfd2f60232a is in state STARTED 2026-01-03 00:46:36.466085 | orchestrator | 2026-01-03 00:46:36 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:46:36.466089 | orchestrator | 2026-01-03 00:46:36 | INFO  | Task 
42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:46:36.466092 | orchestrator | 2026-01-03 00:46:36 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:46:36.466096 | orchestrator | 2026-01-03 00:46:36 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:46:36.466101 | orchestrator | 2026-01-03 00:46:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:46:39.490429 | orchestrator | 2026-01-03 00:46:39 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:46:39.522801 | orchestrator | 2026-01-03 00:46:39 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:46:39.522890 | orchestrator | 2026-01-03 00:46:39 | INFO  | Task 8cb36722-bdf1-4a63-a30c-4dfd2f60232a is in state STARTED 2026-01-03 00:46:39.522929 | orchestrator | 2026-01-03 00:46:39 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:46:39.522938 | orchestrator | 2026-01-03 00:46:39 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:46:39.522946 | orchestrator | 2026-01-03 00:46:39 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:46:39.522954 | orchestrator | 2026-01-03 00:46:39 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:46:39.522963 | orchestrator | 2026-01-03 00:46:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:46:42.585066 | orchestrator | 2026-01-03 00:46:42 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:46:42.585162 | orchestrator | 2026-01-03 00:46:42 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:46:42.585173 | orchestrator | 2026-01-03 00:46:42 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:46:42.589005 | orchestrator | 2026-01-03 00:46:42.589086 | orchestrator 
| PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-01-03 00:46:42.589095 | orchestrator | 2026-01-03 00:46:42.589103 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-01-03 00:46:42.589110 | orchestrator | Saturday 03 January 2026 00:46:27 +0000 (0:00:00.578) 0:00:00.578 ****** 2026-01-03 00:46:42.589117 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:46:42.589124 | orchestrator | changed: [testbed-manager] 2026-01-03 00:46:42.589131 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:46:42.589137 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:46:42.589144 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:46:42.589150 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:46:42.589157 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:46:42.589163 | orchestrator | 2026-01-03 00:46:42.589170 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-01-03 00:46:42.589177 | orchestrator | Saturday 03 January 2026 00:46:32 +0000 (0:00:04.683) 0:00:05.262 ****** 2026-01-03 00:46:42.589184 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-03 00:46:42.589192 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-03 00:46:42.589198 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-03 00:46:42.589205 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-03 00:46:42.589211 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-03 00:46:42.589219 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-03 00:46:42.589226 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-03 00:46:42.589232 | orchestrator | 2026-01-03 00:46:42.589239 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-01-03 00:46:42.589246 | orchestrator | Saturday 03 January 2026 00:46:33 +0000 (0:00:01.329) 0:00:06.592 ****** 2026-01-03 00:46:42.589262 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-03 00:46:33.268891', 'end': '2026-01-03 00:46:33.272500', 'delta': '0:00:00.003609', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-03 00:46:42.589276 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-03 00:46:33.250055', 'end': '2026-01-03 00:46:33.256624', 'delta': '0:00:00.006569', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-03 00:46:42.589302 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-03 00:46:33.238300', 'end': '2026-01-03 00:46:33.244382', 'delta': '0:00:00.006082', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-03 00:46:42.589330 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-03 00:46:33.250479', 'end': '2026-01-03 00:46:33.257245', 'delta': '0:00:00.006766', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-03 00:46:42.589336 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-03 00:46:33.188375', 'end': '2026-01-03 00:46:33.195733', 'delta': '0:00:00.007358', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-03 00:46:42.589346 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-03 00:46:33.170562', 'end': '2026-01-03 00:46:33.177971', 'delta': '0:00:00.007409', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-03 00:46:42.589361 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-03 00:46:33.152520', 'end': '2026-01-03 00:46:33.158927', 'delta': '0:00:00.006407', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-03 00:46:42.589368 | orchestrator | 2026-01-03 00:46:42.589374 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-01-03 00:46:42.589380 | orchestrator | Saturday 03 January 2026 00:46:34 +0000 (0:00:01.518) 0:00:08.111 ****** 2026-01-03 00:46:42.589387 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-03 00:46:42.589394 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-03 00:46:42.589400 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-03 00:46:42.589407 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-03 00:46:42.589413 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-03 00:46:42.589419 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-03 00:46:42.589425 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-03 00:46:42.589428 | orchestrator | 2026-01-03 00:46:42.589432 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-01-03 00:46:42.589436 | orchestrator | Saturday 03 January 2026 00:46:36 +0000 (0:00:01.477) 0:00:09.588 ****** 2026-01-03 00:46:42.589440 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-01-03 00:46:42.589446 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-01-03 00:46:42.589452 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-01-03 00:46:42.589458 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-01-03 00:46:42.589462 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-01-03 00:46:42.589465 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-01-03 00:46:42.589469 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-01-03 00:46:42.589473 | orchestrator | 2026-01-03 00:46:42.589478 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:46:42.589490 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:46:42.589499 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:46:42.589505 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:46:42.589511 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:46:42.589517 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:46:42.589523 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:46:42.589530 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:46:42.589543 | orchestrator | 2026-01-03 00:46:42.589549 | orchestrator | 2026-01-03 00:46:42.589574 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-01-03 00:46:42.589579 | orchestrator | Saturday 03 January 2026 00:46:39 +0000 (0:00:03.130) 0:00:12.719 ****** 2026-01-03 00:46:42.589583 | orchestrator | =============================================================================== 2026-01-03 00:46:42.589587 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.68s 2026-01-03 00:46:42.589591 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.13s 2026-01-03 00:46:42.589595 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.52s 2026-01-03 00:46:42.589599 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.48s 2026-01-03 00:46:42.589602 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.33s 2026-01-03 00:46:42.589606 | orchestrator | 2026-01-03 00:46:42 | INFO  | Task 8cb36722-bdf1-4a63-a30c-4dfd2f60232a is in state SUCCESS 2026-01-03 00:46:42.589611 | orchestrator | 2026-01-03 00:46:42 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:46:42.591741 | orchestrator | 2026-01-03 00:46:42 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:46:42.594624 | orchestrator | 2026-01-03 00:46:42 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:46:42.595798 | orchestrator | 2026-01-03 00:46:42 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:46:42.595849 | orchestrator | 2026-01-03 00:46:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:46:45.655353 | orchestrator | 2026-01-03 00:46:45 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:46:45.655986 | orchestrator | 2026-01-03 00:46:45 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is 
in state STARTED 2026-01-03 00:46:45.661321 | orchestrator | 2026-01-03 00:46:45 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:46:45.667174 | orchestrator | 2026-01-03 00:46:45 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:46:45.671329 | orchestrator | 2026-01-03 00:46:45 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:46:45.673128 | orchestrator | 2026-01-03 00:46:45 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:46:45.675740 | orchestrator | 2026-01-03 00:46:45 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:46:45.675964 | orchestrator | 2026-01-03 00:46:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:46:48.779286 | orchestrator | 2026-01-03 00:46:48 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:46:48.779343 | orchestrator | 2026-01-03 00:46:48 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:46:48.779350 | orchestrator | 2026-01-03 00:46:48 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:46:48.779356 | orchestrator | 2026-01-03 00:46:48 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:46:48.779362 | orchestrator | 2026-01-03 00:46:48 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:46:48.779367 | orchestrator | 2026-01-03 00:46:48 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:46:48.779373 | orchestrator | 2026-01-03 00:46:48 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:46:48.779379 | orchestrator | 2026-01-03 00:46:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:46:51.805633 | orchestrator | 2026-01-03 00:46:51 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in 
state STARTED 2026-01-03 00:46:51.806051 | orchestrator | 2026-01-03 00:46:51 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:46:51.807819 | orchestrator | 2026-01-03 00:46:51 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:46:51.809008 | orchestrator | 2026-01-03 00:46:51 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:46:51.809766 | orchestrator | 2026-01-03 00:46:51 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:46:51.810347 | orchestrator | 2026-01-03 00:46:51 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:46:51.810997 | orchestrator | 2026-01-03 00:46:51 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:46:51.811028 | orchestrator | 2026-01-03 00:46:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:46:54.861244 | orchestrator | 2026-01-03 00:46:54 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:46:54.861300 | orchestrator | 2026-01-03 00:46:54 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:46:54.861309 | orchestrator | 2026-01-03 00:46:54 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:46:54.861316 | orchestrator | 2026-01-03 00:46:54 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:46:54.861322 | orchestrator | 2026-01-03 00:46:54 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:46:54.861327 | orchestrator | 2026-01-03 00:46:54 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:46:54.861333 | orchestrator | 2026-01-03 00:46:54 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:46:54.861338 | orchestrator | 2026-01-03 00:46:54 | INFO  | Wait 1 second(s) until the next 
check 2026-01-03 00:46:57.889016 | orchestrator | 2026-01-03 00:46:57 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:46:57.889531 | orchestrator | 2026-01-03 00:46:57 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:46:57.890144 | orchestrator | 2026-01-03 00:46:57 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:46:57.892850 | orchestrator | 2026-01-03 00:46:57 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:46:57.893358 | orchestrator | 2026-01-03 00:46:57 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:46:57.894141 | orchestrator | 2026-01-03 00:46:57 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:46:57.894868 | orchestrator | 2026-01-03 00:46:57 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:46:57.894890 | orchestrator | 2026-01-03 00:46:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:00.952017 | orchestrator | 2026-01-03 00:47:00 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:00.952079 | orchestrator | 2026-01-03 00:47:00 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:47:00.953090 | orchestrator | 2026-01-03 00:47:00 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:00.953350 | orchestrator | 2026-01-03 00:47:00 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:00.954083 | orchestrator | 2026-01-03 00:47:00 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:00.954665 | orchestrator | 2026-01-03 00:47:00 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:47:00.958153 | orchestrator | 2026-01-03 00:47:00 | INFO  | Task 
09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:00.958202 | orchestrator | 2026-01-03 00:47:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:04.103679 | orchestrator | 2026-01-03 00:47:04 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:04.103738 | orchestrator | 2026-01-03 00:47:04 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:47:04.103747 | orchestrator | 2026-01-03 00:47:04 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:04.103753 | orchestrator | 2026-01-03 00:47:04 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:04.103759 | orchestrator | 2026-01-03 00:47:04 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:04.103765 | orchestrator | 2026-01-03 00:47:04 | INFO  | Task 2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state STARTED 2026-01-03 00:47:04.105057 | orchestrator | 2026-01-03 00:47:04 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:04.105097 | orchestrator | 2026-01-03 00:47:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:07.177679 | orchestrator | 2026-01-03 00:47:07 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:07.179069 | orchestrator | 2026-01-03 00:47:07 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:47:07.179974 | orchestrator | 2026-01-03 00:47:07 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:07.180516 | orchestrator | 2026-01-03 00:47:07 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:07.181871 | orchestrator | 2026-01-03 00:47:07 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:07.183356 | orchestrator | 2026-01-03 00:47:07 | INFO  | Task 
2663aef0-4c6e-40eb-a423-f5e5e12fcee4 is in state SUCCESS 2026-01-03 00:47:07.184985 | orchestrator | 2026-01-03 00:47:07 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:07.185031 | orchestrator | 2026-01-03 00:47:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:10.237293 | orchestrator | 2026-01-03 00:47:10 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:10.237350 | orchestrator | 2026-01-03 00:47:10 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:47:10.240885 | orchestrator | 2026-01-03 00:47:10 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:10.242871 | orchestrator | 2026-01-03 00:47:10 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:10.244723 | orchestrator | 2026-01-03 00:47:10 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:10.244745 | orchestrator | 2026-01-03 00:47:10 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:10.244749 | orchestrator | 2026-01-03 00:47:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:13.307422 | orchestrator | 2026-01-03 00:47:13 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:13.308146 | orchestrator | 2026-01-03 00:47:13 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state STARTED 2026-01-03 00:47:13.321884 | orchestrator | 2026-01-03 00:47:13 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:13.321945 | orchestrator | 2026-01-03 00:47:13 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:13.321959 | orchestrator | 2026-01-03 00:47:13 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:13.321965 | orchestrator | 2026-01-03 00:47:13 | INFO  | Task 
09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:13.321971 | orchestrator | 2026-01-03 00:47:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:16.352096 | orchestrator | 2026-01-03 00:47:16 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:16.353716 | orchestrator | 2026-01-03 00:47:16 | INFO  | Task eb88b932-988a-4ecb-bfb7-bff76503a26d is in state SUCCESS 2026-01-03 00:47:16.354711 | orchestrator | 2026-01-03 00:47:16 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:16.358844 | orchestrator | 2026-01-03 00:47:16 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:16.363623 | orchestrator | 2026-01-03 00:47:16 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:16.368086 | orchestrator | 2026-01-03 00:47:16 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:16.368154 | orchestrator | 2026-01-03 00:47:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:19.416095 | orchestrator | 2026-01-03 00:47:19 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:19.420240 | orchestrator | 2026-01-03 00:47:19 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:19.425370 | orchestrator | 2026-01-03 00:47:19 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:19.429471 | orchestrator | 2026-01-03 00:47:19 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:19.431218 | orchestrator | 2026-01-03 00:47:19 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:19.431298 | orchestrator | 2026-01-03 00:47:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:22.473490 | orchestrator | 2026-01-03 00:47:22 | INFO  | Task 
f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:22.477557 | orchestrator | 2026-01-03 00:47:22 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:22.482066 | orchestrator | 2026-01-03 00:47:22 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:22.484650 | orchestrator | 2026-01-03 00:47:22 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:22.487633 | orchestrator | 2026-01-03 00:47:22 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:22.488585 | orchestrator | 2026-01-03 00:47:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:25.550126 | orchestrator | 2026-01-03 00:47:25 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:25.550175 | orchestrator | 2026-01-03 00:47:25 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:25.550181 | orchestrator | 2026-01-03 00:47:25 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:25.550199 | orchestrator | 2026-01-03 00:47:25 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:25.552276 | orchestrator | 2026-01-03 00:47:25 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:25.553000 | orchestrator | 2026-01-03 00:47:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:28.689601 | orchestrator | 2026-01-03 00:47:28 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:28.689663 | orchestrator | 2026-01-03 00:47:28 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:28.690787 | orchestrator | 2026-01-03 00:47:28 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:28.693234 | orchestrator | 2026-01-03 00:47:28 | INFO  | Task 
42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:28.694434 | orchestrator | 2026-01-03 00:47:28 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:28.694465 | orchestrator | 2026-01-03 00:47:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:31.727437 | orchestrator | 2026-01-03 00:47:31 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:31.729381 | orchestrator | 2026-01-03 00:47:31 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:31.733322 | orchestrator | 2026-01-03 00:47:31 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:31.734678 | orchestrator | 2026-01-03 00:47:31 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:31.737143 | orchestrator | 2026-01-03 00:47:31 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:31.737182 | orchestrator | 2026-01-03 00:47:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:34.805363 | orchestrator | 2026-01-03 00:47:34 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:34.806856 | orchestrator | 2026-01-03 00:47:34 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:34.808451 | orchestrator | 2026-01-03 00:47:34 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:34.812042 | orchestrator | 2026-01-03 00:47:34 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:34.812151 | orchestrator | 2026-01-03 00:47:34 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:34.812166 | orchestrator | 2026-01-03 00:47:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:37.887598 | orchestrator | 2026-01-03 00:47:37 | INFO  | Task 
f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:37.887657 | orchestrator | 2026-01-03 00:47:37 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:37.888503 | orchestrator | 2026-01-03 00:47:37 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:37.889655 | orchestrator | 2026-01-03 00:47:37 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:37.890497 | orchestrator | 2026-01-03 00:47:37 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:37.890610 | orchestrator | 2026-01-03 00:47:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:40.933675 | orchestrator | 2026-01-03 00:47:40 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:40.936084 | orchestrator | 2026-01-03 00:47:40 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:40.938632 | orchestrator | 2026-01-03 00:47:40 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:40.942568 | orchestrator | 2026-01-03 00:47:40 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:40.944539 | orchestrator | 2026-01-03 00:47:40 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:40.945739 | orchestrator | 2026-01-03 00:47:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:43.989811 | orchestrator | 2026-01-03 00:47:43 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:43.991914 | orchestrator | 2026-01-03 00:47:43 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:43.993901 | orchestrator | 2026-01-03 00:47:43 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:43.995765 | orchestrator | 2026-01-03 00:47:43 | INFO  | Task 
42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:43.995806 | orchestrator | 2026-01-03 00:47:43 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:43.995811 | orchestrator | 2026-01-03 00:47:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:47.038905 | orchestrator | 2026-01-03 00:47:47 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state STARTED 2026-01-03 00:47:47.040334 | orchestrator | 2026-01-03 00:47:47 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED 2026-01-03 00:47:47.043102 | orchestrator | 2026-01-03 00:47:47 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:47:47.044114 | orchestrator | 2026-01-03 00:47:47 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:47:47.046220 | orchestrator | 2026-01-03 00:47:47 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:47:47.046731 | orchestrator | 2026-01-03 00:47:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:47:50.104841 | orchestrator | 2026-01-03 00:47:50 | INFO  | Task f0c52b94-dca4-43f3-8b4c-c7ca1d5d56c0 is in state SUCCESS 2026-01-03 00:47:50.106596 | orchestrator | 2026-01-03 00:47:50.106655 | orchestrator | 2026-01-03 00:47:50.106665 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-01-03 00:47:50.106671 | orchestrator | 2026-01-03 00:47:50.106677 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-01-03 00:47:50.106684 | orchestrator | Saturday 03 January 2026 00:46:28 +0000 (0:00:00.720) 0:00:00.720 ****** 2026-01-03 00:47:50.106689 | orchestrator | ok: [testbed-manager] => { 2026-01-03 00:47:50.106696 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-01-03 00:47:50.106702 | orchestrator | }
2026-01-03 00:47:50.106707 | orchestrator |
2026-01-03 00:47:50.106713 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-01-03 00:47:50.106716 | orchestrator | Saturday 03 January 2026 00:46:29 +0000 (0:00:00.447) 0:00:01.167 ******
2026-01-03 00:47:50.106719 | orchestrator | ok: [testbed-manager]
2026-01-03 00:47:50.106723 | orchestrator |
2026-01-03 00:47:50.106727 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-01-03 00:47:50.106730 | orchestrator | Saturday 03 January 2026 00:46:30 +0000 (0:00:01.191) 0:00:02.359 ******
2026-01-03 00:47:50.106733 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-01-03 00:47:50.106737 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-01-03 00:47:50.106743 | orchestrator |
2026-01-03 00:47:50.106748 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-01-03 00:47:50.106768 | orchestrator | Saturday 03 January 2026 00:46:31 +0000 (0:00:01.252) 0:00:03.611 ******
2026-01-03 00:47:50.106774 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.106779 | orchestrator |
2026-01-03 00:47:50.106782 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-01-03 00:47:50.106785 | orchestrator | Saturday 03 January 2026 00:46:35 +0000 (0:00:03.641) 0:00:07.252 ******
2026-01-03 00:47:50.106788 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.106792 | orchestrator |
2026-01-03 00:47:50.106796 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-01-03 00:47:50.106801 | orchestrator | Saturday 03 January 2026 00:46:37 +0000 (0:00:01.757) 0:00:09.010 ******
2026-01-03 00:47:50.106807 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-01-03 00:47:50.106829 | orchestrator | ok: [testbed-manager]
2026-01-03 00:47:50.106840 | orchestrator |
2026-01-03 00:47:50.106846 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-03 00:47:50.106851 | orchestrator | Saturday 03 January 2026 00:47:03 +0000 (0:00:26.467) 0:00:35.478 ******
2026-01-03 00:47:50.106855 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.106861 | orchestrator |
2026-01-03 00:47:50.106866 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:47:50.106871 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:47:50.106878 | orchestrator |
2026-01-03 00:47:50.106883 | orchestrator |
2026-01-03 00:47:50.106887 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:47:50.106892 | orchestrator | Saturday 03 January 2026 00:47:06 +0000 (0:00:03.122) 0:00:38.601 ******
2026-01-03 00:47:50.106897 | orchestrator | ===============================================================================
2026-01-03 00:47:50.106902 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.47s
2026-01-03 00:47:50.106907 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.64s
2026-01-03 00:47:50.106913 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.12s
2026-01-03 00:47:50.106918 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.76s
2026-01-03 00:47:50.106923 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.25s
2026-01-03 00:47:50.106929 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.19s
2026-01-03 00:47:50.106956 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.45s
2026-01-03 00:47:50.106959 | orchestrator |
2026-01-03 00:47:50.106962 | orchestrator |
2026-01-03 00:47:50.106966 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-03 00:47:50.106969 | orchestrator |
2026-01-03 00:47:50.106972 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-03 00:47:50.106975 | orchestrator | Saturday 03 January 2026 00:46:30 +0000 (0:00:00.206) 0:00:00.206 ******
2026-01-03 00:47:50.106979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-03 00:47:50.106983 | orchestrator |
2026-01-03 00:47:50.106986 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-03 00:47:50.106989 | orchestrator | Saturday 03 January 2026 00:46:30 +0000 (0:00:00.222) 0:00:00.429 ******
2026-01-03 00:47:50.106993 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-03 00:47:50.106999 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-03 00:47:50.107004 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-03 00:47:50.107009 | orchestrator |
2026-01-03 00:47:50.107015 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-03 00:47:50.107028 | orchestrator | Saturday 03 January 2026 00:46:31 +0000 (0:00:01.436) 0:00:01.865 ******
2026-01-03 00:47:50.107032 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.107037 | orchestrator |
2026-01-03 00:47:50.107042 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-03 00:47:50.107047 | orchestrator | Saturday 03 January 2026 00:46:34 +0000 (0:00:02.452) 0:00:04.318 ******
2026-01-03 00:47:50.107064 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-03 00:47:50.107070 | orchestrator | ok: [testbed-manager]
2026-01-03 00:47:50.107074 | orchestrator |
2026-01-03 00:47:50.107077 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-03 00:47:50.107081 | orchestrator | Saturday 03 January 2026 00:47:09 +0000 (0:00:35.483) 0:00:39.801 ******
2026-01-03 00:47:50.107084 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.107087 | orchestrator |
2026-01-03 00:47:50.107090 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-03 00:47:50.107093 | orchestrator | Saturday 03 January 2026 00:47:10 +0000 (0:00:00.727) 0:00:40.528 ******
2026-01-03 00:47:50.107120 | orchestrator | ok: [testbed-manager]
2026-01-03 00:47:50.107124 | orchestrator |
2026-01-03 00:47:50.107128 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-03 00:47:50.107131 | orchestrator | Saturday 03 January 2026 00:47:11 +0000 (0:00:00.568) 0:00:41.096 ******
2026-01-03 00:47:50.107135 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.107139 | orchestrator |
2026-01-03 00:47:50.107142 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-03 00:47:50.107146 | orchestrator | Saturday 03 January 2026 00:47:13 +0000 (0:00:02.049) 0:00:43.146 ******
2026-01-03 00:47:50.107150 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.107153 | orchestrator |
2026-01-03 00:47:50.107157 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-03 00:47:50.107162 | orchestrator | Saturday 03 January 2026 00:47:13 +0000 (0:00:00.483) 0:00:43.854 ******
2026-01-03 00:47:50.107167 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.107175 | orchestrator |
2026-01-03 00:47:50.107182 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-03 00:47:50.107187 | orchestrator | Saturday 03 January 2026 00:47:14 +0000 (0:00:00.483) 0:00:44.337 ******
2026-01-03 00:47:50.107192 | orchestrator | ok: [testbed-manager]
2026-01-03 00:47:50.107197 | orchestrator |
2026-01-03 00:47:50.107202 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:47:50.107208 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:47:50.107214 | orchestrator |
2026-01-03 00:47:50.107219 | orchestrator |
2026-01-03 00:47:50.107224 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:47:50.107234 | orchestrator | Saturday 03 January 2026 00:47:14 +0000 (0:00:00.440) 0:00:44.777 ******
2026-01-03 00:47:50.107239 | orchestrator | ===============================================================================
2026-01-03 00:47:50.107244 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.48s
2026-01-03 00:47:50.107250 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.45s
2026-01-03 00:47:50.107255 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.05s
2026-01-03 00:47:50.107260 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.44s
2026-01-03 00:47:50.107265 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.73s
2026-01-03 00:47:50.107270 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.71s
2026-01-03 00:47:50.107275 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.57s
2026-01-03 00:47:50.107280 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.48s
2026-01-03 00:47:50.107290 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.44s
2026-01-03 00:47:50.107296 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.22s
2026-01-03 00:47:50.107301 | orchestrator |
2026-01-03 00:47:50.107307 | orchestrator |
2026-01-03 00:47:50.107312 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 00:47:50.107318 | orchestrator |
2026-01-03 00:47:50.107323 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 00:47:50.107328 | orchestrator | Saturday 03 January 2026 00:46:27 +0000 (0:00:00.537) 0:00:00.537 ******
2026-01-03 00:47:50.107333 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-03 00:47:50.107338 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-03 00:47:50.107342 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-03 00:47:50.107345 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-03 00:47:50.107348 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-03 00:47:50.107351 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-03 00:47:50.107354 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-03 00:47:50.107357 | orchestrator |
2026-01-03 00:47:50.107361 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-03 00:47:50.107364 | orchestrator |
2026-01-03 00:47:50.107367 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-03 00:47:50.107370 | orchestrator | Saturday 03 January 2026 00:46:30 +0000 (0:00:02.504) 0:00:03.042 ******
2026-01-03 00:47:50.107380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:47:50.107388 | orchestrator |
2026-01-03 00:47:50.107393 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-03 00:47:50.107400 | orchestrator | Saturday 03 January 2026 00:46:31 +0000 (0:00:01.493) 0:00:04.535 ******
2026-01-03 00:47:50.107407 | orchestrator | ok: [testbed-manager]
2026-01-03 00:47:50.107412 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:47:50.107417 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:47:50.107422 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:47:50.107428 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:47:50.107442 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:47:50.107448 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:47:50.107453 | orchestrator |
2026-01-03 00:47:50.107458 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-03 00:47:50.107463 | orchestrator | Saturday 03 January 2026 00:46:34 +0000 (0:00:03.064) 0:00:07.600 ******
2026-01-03 00:47:50.107468 | orchestrator | ok: [testbed-manager]
2026-01-03 00:47:50.107474 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:47:50.107477 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:47:50.107480 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:47:50.107483 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:47:50.107486 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:47:50.107489 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:47:50.107493 | orchestrator |
2026-01-03 00:47:50.107496 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-03 00:47:50.107499 | orchestrator | Saturday 03 January 2026 00:46:37 +0000 (0:00:03.038) 0:00:10.638 ******
2026-01-03 00:47:50.107502 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.107505 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:47:50.107508 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:47:50.107511 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:47:50.107515 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:47:50.107547 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:47:50.107550 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:47:50.107558 | orchestrator |
2026-01-03 00:47:50.107561 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-03 00:47:50.107564 | orchestrator | Saturday 03 January 2026 00:46:40 +0000 (0:00:02.414) 0:00:13.052 ******
2026-01-03 00:47:50.107567 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:47:50.107571 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:47:50.107574 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:47:50.107577 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:47:50.107580 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:47:50.107583 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:47:50.107586 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.107589 | orchestrator |
2026-01-03 00:47:50.107592 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-03 00:47:50.107595 | orchestrator | Saturday 03 January 2026 00:46:52 +0000 (0:00:12.886) 0:00:25.939 ******
2026-01-03 00:47:50.107598 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:47:50.107602 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:47:50.107605 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:47:50.107608 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:47:50.107612 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:47:50.107617 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:47:50.107621 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.107625 | orchestrator |
2026-01-03 00:47:50.107628 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-03 00:47:50.107631 | orchestrator | Saturday 03 January 2026 00:47:27 +0000 (0:00:34.739) 0:01:00.679 ******
2026-01-03 00:47:50.107635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:47:50.107639 | orchestrator |
2026-01-03 00:47:50.107642 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-03 00:47:50.107645 | orchestrator | Saturday 03 January 2026 00:47:29 +0000 (0:00:01.357) 0:01:02.037 ******
2026-01-03 00:47:50.107648 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-03 00:47:50.107652 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-03 00:47:50.107655 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-03 00:47:50.107658 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-03 00:47:50.107661 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-03 00:47:50.107664 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-03 00:47:50.107667 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-03 00:47:50.107670 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-03 00:47:50.107673 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-03 00:47:50.107677 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-03 00:47:50.107686 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-03 00:47:50.107689 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-03 00:47:50.107692 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-03 00:47:50.107695 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-03 00:47:50.107699 | orchestrator |
2026-01-03 00:47:50.107702 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-03 00:47:50.107706 | orchestrator | Saturday 03 January 2026 00:47:33 +0000 (0:00:04.414) 0:01:06.451 ******
2026-01-03 00:47:50.107709 | orchestrator | ok: [testbed-manager]
2026-01-03 00:47:50.107712 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:47:50.107715 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:47:50.107718 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:47:50.107721 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:47:50.107724 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:47:50.107728 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:47:50.107733 | orchestrator |
2026-01-03 00:47:50.107736 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-03 00:47:50.107740 | orchestrator | Saturday 03 January 2026 00:47:34 +0000 (0:00:01.295) 0:01:07.747 ******
2026-01-03 00:47:50.107743 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.107746 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:47:50.107824 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:47:50.107832 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:47:50.107836 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:47:50.107841 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:47:50.107847 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:47:50.107851 | orchestrator |
2026-01-03 00:47:50.107864 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-03 00:47:50.107898 | orchestrator | Saturday 03 January 2026 00:47:36 +0000 (0:00:01.437) 0:01:09.184 ******
2026-01-03 00:47:50.107903 | orchestrator | ok: [testbed-manager]
2026-01-03 00:47:50.107906 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:47:50.107909 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:47:50.107913 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:47:50.107916 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:47:50.107919 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:47:50.107922 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:47:50.107927 | orchestrator |
2026-01-03 00:47:50.107932 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-03 00:47:50.107938 | orchestrator | Saturday 03 January 2026 00:47:37 +0000 (0:00:01.403) 0:01:10.587 ******
2026-01-03 00:47:50.107943 | orchestrator | ok: [testbed-manager]
2026-01-03 00:47:50.107948 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:47:50.107953 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:47:50.107958 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:47:50.107963 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:47:50.107968 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:47:50.107971 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:47:50.107974 | orchestrator |
2026-01-03 00:47:50.107977 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-03 00:47:50.107981 | orchestrator | Saturday 03 January 2026 00:47:39 +0000 (0:00:02.154) 0:01:12.742 ******
2026-01-03 00:47:50.107984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-03 00:47:50.107989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:47:50.107999 | orchestrator |
2026-01-03 00:47:50.108006 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-03 00:47:50.108011 | orchestrator | Saturday 03 January 2026 00:47:41 +0000 (0:00:01.627) 0:01:14.370 ******
2026-01-03 00:47:50.108016 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.108021 | orchestrator |
2026-01-03 00:47:50.108026 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-03 00:47:50.108032 | orchestrator | Saturday 03 January 2026 00:47:43 +0000 (0:00:02.398) 0:01:16.768 ******
2026-01-03 00:47:50.108037 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:47:50.108042 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:47:50.108051 | orchestrator | changed: [testbed-manager]
2026-01-03 00:47:50.108056 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:47:50.108061 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:47:50.108067 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:47:50.108072 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:47:50.108077 | orchestrator |
2026-01-03 00:47:50.108082 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:47:50.108087 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:47:50.108100 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:47:50.108103 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:47:50.108108 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:47:50.108113 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:47:50.108119 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:47:50.108124 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:47:50.108129 | orchestrator |
2026-01-03 00:47:50.108135 | orchestrator |
2026-01-03 00:47:50.108140 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:47:50.108146 | orchestrator | Saturday 03 January 2026 00:47:46 +0000 (0:00:02.951) 0:01:19.720 ******
2026-01-03 00:47:50.108149 | orchestrator | ===============================================================================
2026-01-03 00:47:50.108152 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 34.74s
2026-01-03 00:47:50.108156 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.89s
2026-01-03 00:47:50.108159 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.41s
2026-01-03 00:47:50.108162 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.06s
2026-01-03 00:47:50.108165 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.04s
2026-01-03 00:47:50.108168 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.95s
2026-01-03 00:47:50.108171 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.50s
2026-01-03 00:47:50.108174 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.41s
2026-01-03 00:47:50.108177 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.40s
2026-01-03 00:47:50.108181 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.15s
2026-01-03 00:47:50.108186 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.63s
2026-01-03 00:47:50.108199 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.49s
2026-01-03 00:47:50.108204 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.44s
2026-01-03 00:47:50.108208 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.40s
2026-01-03 00:47:50.108213 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.36s
2026-01-03 00:47:50.108218 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.30s
2026-01-03 00:47:50.108223 | orchestrator | 2026-01-03 00:47:50 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED
2026-01-03 00:47:50.108228 | orchestrator | 2026-01-03 00:47:50 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:47:50.108890 | orchestrator | 2026-01-03 00:47:50 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:47:50.110253 | orchestrator | 2026-01-03 00:47:50 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:47:50.110286 | orchestrator | 2026-01-03 00:47:50 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:47:53.151270 | orchestrator | 2026-01-03 00:47:53 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED
2026-01-03 00:47:53.151367 | orchestrator | 2026-01-03 00:47:53 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:47:53.151376 | orchestrator | 2026-01-03 00:47:53 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:47:53.151485 | orchestrator | 2026-01-03 00:47:53 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:47:53.152269 | orchestrator | 2026-01-03 00:47:53 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:47:56.195952 | orchestrator | 2026-01-03 00:47:56 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state STARTED
2026-01-03 00:47:56.198377 | orchestrator | 2026-01-03 00:47:56 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:47:56.200491 | orchestrator | 2026-01-03 00:47:56 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:47:56.203251 | orchestrator | 2026-01-03 00:47:56 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:47:56.203304 | orchestrator | 2026-01-03 00:47:56 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:47:59.360736 | orchestrator | 2026-01-03 00:47:59 | INFO  | Task a99367c5-d100-43ee-b57c-82534bf6a9a3 is in state SUCCESS
2026-01-03 00:47:59.365611 | orchestrator | 2026-01-03 00:47:59 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:47:59.368868 | orchestrator | 2026-01-03 00:47:59 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:47:59.370857 | orchestrator | 2026-01-03 00:47:59 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:47:59.370972 | orchestrator | 2026-01-03 00:47:59 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:02.416275 | orchestrator | 2026-01-03 00:48:02 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:48:02.417824 | orchestrator | 2026-01-03 00:48:02 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:48:02.419814 | orchestrator | 2026-01-03 00:48:02 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:48:02.419886 | orchestrator | 2026-01-03 00:48:02 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:05.475930 | orchestrator | 2026-01-03 00:48:05 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:48:05.477552 | orchestrator | 2026-01-03 00:48:05 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:48:05.479046 | orchestrator | 2026-01-03 00:48:05 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:48:05.479093 | orchestrator | 2026-01-03 00:48:05 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:08.522260 | orchestrator | 2026-01-03 00:48:08 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:48:08.523890 | orchestrator | 2026-01-03 00:48:08 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:48:08.525108 | orchestrator | 2026-01-03 00:48:08 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:48:08.525166 | orchestrator | 2026-01-03 00:48:08 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:11.563814 | orchestrator | 2026-01-03 00:48:11 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:48:11.565152 | orchestrator | 2026-01-03 00:48:11 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:48:11.567077 | orchestrator | 2026-01-03 00:48:11 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:48:11.567183 | orchestrator | 2026-01-03 00:48:11 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:14.617028 | orchestrator | 2026-01-03 00:48:14 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:48:14.620645 | orchestrator | 2026-01-03 00:48:14 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:48:14.623110 | orchestrator | 2026-01-03 00:48:14 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:48:14.623339 | orchestrator | 2026-01-03 00:48:14 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:17.687604 | orchestrator | 2026-01-03 00:48:17 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:48:17.689944 | orchestrator | 2026-01-03 00:48:17 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:48:17.692108 | orchestrator | 2026-01-03 00:48:17 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:48:17.692160 | orchestrator | 2026-01-03 00:48:17 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:20.745393 | orchestrator | 2026-01-03 00:48:20 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:48:20.746775 | orchestrator | 2026-01-03 00:48:20 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:48:20.746829 | orchestrator | 2026-01-03 00:48:20 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:48:20.746837 | orchestrator | 2026-01-03 00:48:20 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:23.806233 | orchestrator | 2026-01-03 00:48:23 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:48:23.806713 | orchestrator | 2026-01-03 00:48:23 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:48:23.808061 | orchestrator | 2026-01-03 00:48:23 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:48:23.808572 | orchestrator | 2026-01-03 00:48:23 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:26.855047 | orchestrator | 2026-01-03 00:48:26 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:48:26.856651 | orchestrator | 2026-01-03 00:48:26 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:48:26.858579 | orchestrator | 2026-01-03 00:48:26 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:48:26.858629 | orchestrator | 2026-01-03 00:48:26 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:29.901407 | orchestrator | 2026-01-03 00:48:29 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:48:29.903904 | orchestrator | 2026-01-03 00:48:29 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:48:29.905658 | orchestrator | 2026-01-03 00:48:29 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:48:29.905718 | orchestrator | 2026-01-03 00:48:29 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:32.957669 | orchestrator | 2026-01-03 00:48:32 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:48:32.959039 | orchestrator | 2026-01-03 00:48:32 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:48:32.961791 | orchestrator | 2026-01-03 00:48:32 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:48:32.961858 | orchestrator | 2026-01-03 00:48:32 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:35.991839 | orchestrator | 2026-01-03 00:48:35 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:48:35.992434 | orchestrator | 2026-01-03 00:48:35 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:48:35.993548 | orchestrator | 2026-01-03 00:48:35 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:48:35.993569 | orchestrator | 2026-01-03 00:48:35 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:48:39.033200 | orchestrator | 2026-01-03 00:48:39 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:48:39.036245 | orchestrator | 2026-01-03 00:48:39 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:48:39.038055 | orchestrator | 2026-01-03 00:48:39 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED
2026-01-03 00:48:39.038110 | orchestrator | 2026-01-03 00:48:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:42.084985 | orchestrator | 2026-01-03 00:48:42 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:48:42.087062 | orchestrator | 2026-01-03 00:48:42 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:48:42.088041 | orchestrator | 2026-01-03 00:48:42 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:48:42.088085 | orchestrator | 2026-01-03 00:48:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:45.132892 | orchestrator | 2026-01-03 00:48:45 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:48:45.133612 | orchestrator | 2026-01-03 00:48:45 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:48:45.134508 | orchestrator | 2026-01-03 00:48:45 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:48:45.134530 | orchestrator | 2026-01-03 00:48:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:48.172579 | orchestrator | 2026-01-03 00:48:48 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:48:48.173072 | orchestrator | 2026-01-03 00:48:48 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:48:48.174817 | orchestrator | 2026-01-03 00:48:48 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:48:48.174861 | orchestrator | 2026-01-03 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:51.211557 | orchestrator | 2026-01-03 00:48:51 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:48:51.213973 | orchestrator | 2026-01-03 00:48:51 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:48:51.223580 | orchestrator | 2026-01-03 
00:48:51 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:48:51.223667 | orchestrator | 2026-01-03 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:54.258812 | orchestrator | 2026-01-03 00:48:54 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:48:54.260391 | orchestrator | 2026-01-03 00:48:54 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:48:54.261501 | orchestrator | 2026-01-03 00:48:54 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:48:54.262643 | orchestrator | 2026-01-03 00:48:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:48:57.298585 | orchestrator | 2026-01-03 00:48:57 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:48:57.299332 | orchestrator | 2026-01-03 00:48:57 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:48:57.301087 | orchestrator | 2026-01-03 00:48:57 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state STARTED 2026-01-03 00:48:57.301142 | orchestrator | 2026-01-03 00:48:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:00.360382 | orchestrator | 2026-01-03 00:49:00 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:49:00.360847 | orchestrator | 2026-01-03 00:49:00 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED 2026-01-03 00:49:00.371854 | orchestrator | 2026-01-03 00:49:00 | INFO  | Task 09ec3e60-5423-4cf1-a842-1417321622de is in state SUCCESS 2026-01-03 00:49:00.371923 | orchestrator | 2026-01-03 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:49:00.374136 | orchestrator | 2026-01-03 00:49:00.374198 | orchestrator | 2026-01-03 00:49:00.374208 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-01-03 00:49:00.374298 | 
orchestrator |
2026-01-03 00:49:00.374307 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-01-03 00:49:00.374314 | orchestrator | Saturday 03 January 2026 00:46:45 +0000 (0:00:00.187) 0:00:00.187 ******
2026-01-03 00:49:00.374322 | orchestrator | ok: [testbed-manager]
2026-01-03 00:49:00.374330 | orchestrator |
2026-01-03 00:49:00.374337 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-01-03 00:49:00.374345 | orchestrator | Saturday 03 January 2026 00:46:47 +0000 (0:00:02.049) 0:00:02.237 ******
2026-01-03 00:49:00.374352 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-01-03 00:49:00.374360 | orchestrator |
2026-01-03 00:49:00.374367 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-01-03 00:49:00.374374 | orchestrator | Saturday 03 January 2026 00:46:48 +0000 (0:00:00.622) 0:00:02.860 ******
2026-01-03 00:49:00.374382 | orchestrator | changed: [testbed-manager]
2026-01-03 00:49:00.374389 | orchestrator |
2026-01-03 00:49:00.374396 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-01-03 00:49:00.374403 | orchestrator | Saturday 03 January 2026 00:46:49 +0000 (0:00:01.421) 0:00:04.281 ******
2026-01-03 00:49:00.374410 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-01-03 00:49:00.374418 | orchestrator | ok: [testbed-manager]
2026-01-03 00:49:00.374425 | orchestrator |
2026-01-03 00:49:00.374433 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-01-03 00:49:00.374440 | orchestrator | Saturday 03 January 2026 00:47:53 +0000 (0:01:03.634) 0:01:07.915 ******
2026-01-03 00:49:00.374447 | orchestrator | changed: [testbed-manager]
2026-01-03 00:49:00.374469 | orchestrator |
2026-01-03 00:49:00.374476 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:49:00.374481 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:49:00.374489 | orchestrator |
2026-01-03 00:49:00.374495 | orchestrator |
2026-01-03 00:49:00.374501 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:49:00.374507 | orchestrator | Saturday 03 January 2026 00:47:58 +0000 (0:00:04.768) 0:01:12.683 ******
2026-01-03 00:49:00.375146 | orchestrator | ===============================================================================
2026-01-03 00:49:00.375170 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 63.63s
2026-01-03 00:49:00.375178 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.77s
2026-01-03 00:49:00.375185 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.05s
2026-01-03 00:49:00.375208 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.42s
2026-01-03 00:49:00.375214 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.62s
2026-01-03 00:49:00.375220 | orchestrator |
2026-01-03 00:49:00.375227 | orchestrator |
2026-01-03 00:49:00.375239 | orchestrator | PLAY [Apply role common]
*******************************************************
2026-01-03 00:49:00.375245 | orchestrator |
2026-01-03 00:49:00.375252 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-03 00:49:00.375259 | orchestrator | Saturday 03 January 2026 00:46:20 +0000 (0:00:00.220) 0:00:00.220 ******
2026-01-03 00:49:00.375267 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:49:00.375275 | orchestrator |
2026-01-03 00:49:00.375282 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-01-03 00:49:00.375289 | orchestrator | Saturday 03 January 2026 00:46:22 +0000 (0:00:01.186) 0:00:01.407 ******
2026-01-03 00:49:00.375296 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:49:00.375303 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:49:00.375310 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:49:00.375317 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:49:00.375324 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:49:00.375330 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:49:00.375337 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:49:00.375344 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-03 00:49:00.375350 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:49:00.375356 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:49:00.375363 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:49:00.375370 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:49:00.375376 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:49:00.375383 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:49:00.375389 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:49:00.375396 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-03 00:49:00.375434 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:49:00.375442 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:49:00.375449 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:49:00.375499 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:49:00.375506 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-03 00:49:00.375511 | orchestrator |
2026-01-03 00:49:00.375517 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-03 00:49:00.375523 | orchestrator | Saturday 03 January 2026 00:46:26 +0000 (0:00:04.062) 0:00:05.469 ******
2026-01-03 00:49:00.375529 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:49:00.375537 | orchestrator |
2026-01-03
00:49:00.375551 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-03 00:49:00.375559 | orchestrator | Saturday 03 January 2026 00:46:27 +0000 (0:00:01.214) 0:00:06.684 ****** 2026-01-03 00:49:00.375569 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.375579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.375589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.375596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.375603 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.375630 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.375638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.375650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.375657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.375665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.375677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.375685 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.375693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.375724 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.375736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.375743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.375750 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-03 00:49:00.375760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.375766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.375773 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.375779 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.375785 | orchestrator | 2026-01-03 00:49:00.375792 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 
2026-01-03 00:49:00.375799 | orchestrator | Saturday 03 January 2026 00:46:31 +0000 (0:00:04.464) 0:00:11.149 ****** 2026-01-03 00:49:00.375821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:49:00.375834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:49:00.375841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:49:00.375847 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-03 00:49:00.375857 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:49:00.375864 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:49:00.375871 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:49:00.375878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.375885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.375902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.375909 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:49:00.375916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.375923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.375930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.375937 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:00.375946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.375954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.375961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.375968 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:00.375975 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:49:00.375986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.375999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376013 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:49:00.376020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376044 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:49:00.376051 | orchestrator |
2026-01-03 00:49:00.376058 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-01-03 00:49:00.376065 | orchestrator | Saturday 03 January 2026 00:46:33 +0000 (0:00:01.529) 0:00:12.679 ******
2026-01-03 00:49:00.376072 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376083 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376093 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376100 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:49:00.376106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376126 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:00.376134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376162 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:00.376169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376220 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:00.376227 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:49:00.376234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes':
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376292 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:49:00.376299 | orchestrator |
2026-01-03 00:49:00.376306 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-01-03 00:49:00.376313 | orchestrator | Saturday 03 January 2026 00:46:35 +0000 (0:00:02.333) 0:00:15.012 ******
2026-01-03 00:49:00.376320 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:49:00.376327 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:00.376334 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:00.376341 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:00.376348 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:49:00.376355 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:49:00.376362 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:49:00.376372 | orchestrator |
2026-01-03 00:49:00.376379 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-01-03 00:49:00.376386 | orchestrator | Saturday 03 January 2026 00:46:36 +0000 (0:00:00.948) 0:00:16.006 ******
2026-01-03 00:49:00.376393 | orchestrator | skipping: [testbed-manager]
2026-01-03 00:49:00.376399 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:49:00.376406 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:49:00.376413 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:49:00.376419 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:49:00.376426 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:49:00.376432 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:49:00.376439 | orchestrator |
2026-01-03 00:49:00.376445 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-01-03 00:49:00.376466 | orchestrator | Saturday 03 January 2026 00:46:37 +0000 (0:00:00.948) 0:00:16.954 ******
2026-01-03 00:49:00.376473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376479 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376493 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.376544 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376551 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376576 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376598 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376605 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376627 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376634 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376648 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image':
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.376659 | orchestrator |
2026-01-03 00:49:00.376666 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-01-03 00:49:00.376673 | orchestrator | Saturday 03 January 2026 00:46:44 +0000 (0:00:06.660) 0:00:23.615 ******
2026-01-03 00:49:00.376680 | orchestrator | [WARNING]: Skipped
2026-01-03 00:49:00.376687 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-01-03 00:49:00.376693 | orchestrator | to this access issue:
2026-01-03 00:49:00.376700 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-01-03 00:49:00.376707 | orchestrator | directory
2026-01-03 00:49:00.376714 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-03 00:49:00.376721 | orchestrator |
2026-01-03 00:49:00.376728 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-01-03 00:49:00.376735 | orchestrator | Saturday 03 January 2026 00:46:46 +0000 (0:00:02.064) 0:00:25.679 ******
2026-01-03 00:49:00.376741 | orchestrator | [WARNING]: Skipped
2026-01-03 00:49:00.376748 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-01-03 00:49:00.376755 | orchestrator | to this access issue:
2026-01-03 00:49:00.376764 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-01-03 00:49:00.376771 | orchestrator | directory
2026-01-03 00:49:00.376778 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-03 00:49:00.376784 | orchestrator |
2026-01-03 00:49:00.376791 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-01-03 00:49:00.376798 | orchestrator | Saturday 03 January 2026 00:46:47 +0000 (0:00:00.863) 0:00:26.543 ******
2026-01-03 00:49:00.376804 | orchestrator | [WARNING]: Skipped
2026-01-03 00:49:00.376811 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-01-03 00:49:00.376818 | orchestrator | to this access issue:
2026-01-03 00:49:00.376824 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-01-03 00:49:00.376831 | orchestrator | directory
2026-01-03 00:49:00.376838 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-03 00:49:00.376844 | orchestrator |
2026-01-03 00:49:00.376851 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-01-03 00:49:00.376858 | orchestrator | Saturday 03 January 2026 00:46:48 +0000 (0:00:01.067) 0:00:27.458 ******
2026-01-03 00:49:00.376864 | orchestrator | [WARNING]: Skipped
2026-01-03 00:49:00.376871 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-01-03 00:49:00.376878 | orchestrator | to this access issue:
2026-01-03 00:49:00.376884 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-01-03 00:49:00.376891 | orchestrator | directory
2026-01-03 00:49:00.376898 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-03 00:49:00.376905 | orchestrator |
2026-01-03 00:49:00.376911 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-01-03 00:49:00.376918 | orchestrator | Saturday 03 January 2026 00:46:49 +0000 (0:00:04.190) 0:00:28.526 ******
2026-01-03 00:49:00.376925 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:49:00.376932 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:00.376938 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:00.376945 | orchestrator | changed: [testbed-manager]
2026-01-03 00:49:00.376952 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:00.376958 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:49:00.376965 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:49:00.376972 | orchestrator |
2026-01-03 00:49:00.376979 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-01-03 00:49:00.376985 | orchestrator | Saturday 03 January 2026 00:46:53 +0000 (0:00:02.758) 0:00:32.716 ******
2026-01-03 00:49:00.376992 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-03 00:49:00.377003 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-03 00:49:00.377010 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-03 00:49:00.377020 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-03 00:49:00.377027 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-03 00:49:00.377034 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-03 00:49:00.377041 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-03 00:49:00.377047 | orchestrator |
2026-01-03 00:49:00.377054 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-01-03 00:49:00.377060 | orchestrator | Saturday 03 January 2026 00:46:56 +0000 (0:00:02.758) 0:00:35.474 ******
2026-01-03 00:49:00.377067 | orchestrator | changed: [testbed-manager]
2026-01-03 00:49:00.377074 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:00.377081 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:00.377087 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:00.377094 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:49:00.377101 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:49:00.377107 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:49:00.377114 | orchestrator |
2026-01-03 00:49:00.377121 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-01-03 00:49:00.377127 | orchestrator | Saturday 03 January 2026 00:46:58 +0000 (0:00:02.677) 0:00:38.152 ******
2026-01-03 00:49:00.377135 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-03 00:49:00.377145 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:49:00.377152 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2',
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.377159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:49:00.377170 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.377181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:49:00.377188 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.377195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:49:00.377203 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.377212 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:49:00.377237 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377244 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377254 | 
orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377261 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.377268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:49:00.377276 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377285 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.377292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:49:00.377303 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377310 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377317 | orchestrator | 2026-01-03 00:49:00.377324 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-03 00:49:00.377331 | orchestrator | Saturday 03 January 2026 00:47:01 +0000 (0:00:02.751) 0:00:40.903 ****** 2026-01-03 00:49:00.377338 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:49:00.377345 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:49:00.377351 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:49:00.377363 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:49:00.377370 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:49:00.377376 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:49:00.377383 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-03 00:49:00.377390 | orchestrator | 2026-01-03 00:49:00.377397 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-03 00:49:00.377403 | orchestrator | Saturday 03 January 2026 00:47:04 +0000 (0:00:02.966) 0:00:43.870 ****** 2026-01-03 00:49:00.377410 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:49:00.377417 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:49:00.377424 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:49:00.377430 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:49:00.377437 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:49:00.377443 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:49:00.377450 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-03 00:49:00.377469 | orchestrator | 2026-01-03 00:49:00.377476 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-03 00:49:00.377483 | orchestrator | Saturday 03 January 2026 00:47:07 +0000 (0:00:02.870) 0:00:46.740 ****** 2026-01-03 00:49:00.377501 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.377511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.377523 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.377530 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.377537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.377548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.377555 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377569 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-03 00:49:00.377579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377593 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-03 00:49:00.377609 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377631 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377670 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:49:00.377683 | orchestrator | 2026-01-03 00:49:00.377693 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-03 00:49:00.377699 | orchestrator | Saturday 03 January 2026 00:47:10 +0000 (0:00:03.080) 0:00:49.820 ****** 2026-01-03 00:49:00.377705 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:49:00.377711 | orchestrator | changed: [testbed-manager] 2026-01-03 00:49:00.377718 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:49:00.377724 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:49:00.377731 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:49:00.377736 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:49:00.377742 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:49:00.377748 | orchestrator | 2026-01-03 00:49:00.377754 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-01-03 00:49:00.377760 | orchestrator | Saturday 03 January 2026 00:47:12 +0000 (0:00:01.745) 0:00:51.565 ****** 2026-01-03 00:49:00.377767 | orchestrator | changed: [testbed-manager] 2026-01-03 00:49:00.377773 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:49:00.377780 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:49:00.377787 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:49:00.377793 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:49:00.377800 | orchestrator | changed: [testbed-node-4] 2026-01-03 
00:49:00.377807 | orchestrator | changed: [testbed-node-5]

TASK [common : Flush handlers] *************************************************
Saturday 03 January 2026 00:47:13 +0000 (0:00:01.284) 0:00:52.850 ******

TASK [common : Flush handlers] *************************************************
Saturday 03 January 2026 00:47:13 +0000 (0:00:00.063) 0:00:52.914 ******

TASK [common : Flush handlers] *************************************************
Saturday 03 January 2026 00:47:13 +0000 (0:00:00.068) 0:00:52.982 ******

TASK [common : Flush handlers] *************************************************
Saturday 03 January 2026 00:47:13 +0000 (0:00:00.077) 0:00:53.060 ******

TASK [common : Flush handlers] *************************************************
Saturday 03 January 2026 00:47:13 +0000 (0:00:00.178) 0:00:53.239 ******

TASK [common : Flush handlers] *************************************************
Saturday 03 January 2026 00:47:13 +0000 (0:00:00.073) 0:00:53.312 ******

TASK [common : Flush handlers] *************************************************
Saturday 03 January 2026 00:47:13 +0000 (0:00:00.059) 0:00:53.371 ******

RUNNING HANDLER [common : Restart fluentd container] ***************************
Saturday 03 January 2026 00:47:14 +0000 (0:00:00.081) 0:00:53.453 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-manager]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-3]

RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
Saturday 03 January 2026 00:48:11 +0000 (0:00:57.639) 0:01:51.092 ******
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-manager]
changed: [testbed-node-3]

RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
Saturday 03 January 2026 00:48:46 +0000 (0:00:35.275) 0:02:26.367 ******
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [common : Restart cron container] ******************************
Saturday 03 January 2026 00:48:49 +0000 (0:00:02.129) 0:02:28.497 ******
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0


TASKS RECAP ********************************************************************
Saturday 03 January 2026 00:48:59 +0000 (0:00:09.990) 0:02:38.488 ******
===============================================================================
common : Restart fluentd container ------------------------------------- 57.64s
common : Restart kolla-toolbox container ------------------------------- 35.28s
common : Restart cron container ----------------------------------------- 9.99s
common : Copying over config.json files for services -------------------- 6.66s
service-cert-copy : common | Copying over extra CA certificates --------- 4.46s
common : Copying over fluentd.conf -------------------------------------- 4.19s
common : Ensuring config directories exist ------------------------------ 4.06s
common : Check common containers ---------------------------------------- 3.08s
common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.97s
common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.87s
common : Copying over cron logrotate config file ------------------------ 2.76s
common : Ensuring config directories have correct owner and permission --- 2.75s
common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.68s
service-cert-copy : common | Copying over backend internal TLS key ------ 2.33s
common : Initializing toolbox container using normal user --------------- 2.13s
common : Find custom fluentd input config files ------------------------- 2.06s
common : Creating log volume -------------------------------------------- 1.75s
service-cert-copy : common | Copying over backend internal TLS certificate --- 1.53s
common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.28s
common : include_tasks -------------------------------------------------- 1.21s
2026-01-03 00:49:03 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:49:03 | INFO  | Task a268fd38-997b-47df-bdd6-135954f5eb54 is in state STARTED
2026-01-03 00:49:03 | INFO  | Task 908639fc-48ff-4998-89b1-0e0cf565af15 is in state STARTED
2026-01-03 00:49:03 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:49:03 | INFO  | Task 631a732b-0385-4e64-9b68-47a260585a95 is in state STARTED
2026-01-03 00:49:03 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:49:03 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:49:06 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:49:06 | INFO  | Task a268fd38-997b-47df-bdd6-135954f5eb54 is in state STARTED
2026-01-03 00:49:06 | INFO  | Task 908639fc-48ff-4998-89b1-0e0cf565af15 is in state STARTED
2026-01-03 00:49:06 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:49:06 | INFO  | Task 631a732b-0385-4e64-9b68-47a260585a95 is in state STARTED
2026-01-03 00:49:06 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:49:06 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:49:09 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:49:09 | INFO  | Task a268fd38-997b-47df-bdd6-135954f5eb54 is in state STARTED
2026-01-03 00:49:09 | INFO  | Task 908639fc-48ff-4998-89b1-0e0cf565af15 is in state STARTED
2026-01-03 00:49:09 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:49:09 | INFO  | Task 631a732b-0385-4e64-9b68-47a260585a95 is in state STARTED
2026-01-03 00:49:09 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:49:09 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:49:12 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:49:12 | INFO  | Task a268fd38-997b-47df-bdd6-135954f5eb54 is in state STARTED
2026-01-03 00:49:12 | INFO  | Task 908639fc-48ff-4998-89b1-0e0cf565af15 is in state STARTED
2026-01-03 00:49:12 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:49:12 | INFO  | Task 631a732b-0385-4e64-9b68-47a260585a95 is in state STARTED
2026-01-03 00:49:12 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:49:12 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:49:15 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:49:15 | INFO  | Task a268fd38-997b-47df-bdd6-135954f5eb54 is in state STARTED
2026-01-03 00:49:15 | INFO  | Task 908639fc-48ff-4998-89b1-0e0cf565af15 is in state STARTED
2026-01-03 00:49:15 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:49:15 | INFO  | Task 631a732b-0385-4e64-9b68-47a260585a95 is in state STARTED
2026-01-03 00:49:15 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:49:15 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:49:18 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:49:18 | INFO  | Task a268fd38-997b-47df-bdd6-135954f5eb54 is in state STARTED
2026-01-03 00:49:18 | INFO  | Task 908639fc-48ff-4998-89b1-0e0cf565af15 is in state STARTED
2026-01-03 00:49:18 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:49:18 | INFO  | Task 631a732b-0385-4e64-9b68-47a260585a95 is in state STARTED
2026-01-03 00:49:18 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:49:18 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:49:21 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:49:21 | INFO  | Task a268fd38-997b-47df-bdd6-135954f5eb54 is in state STARTED
2026-01-03 00:49:21 | INFO  | Task 908639fc-48ff-4998-89b1-0e0cf565af15 is in state STARTED
2026-01-03 00:49:21 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:49:21 | INFO  | Task 631a732b-0385-4e64-9b68-47a260585a95 is in state STARTED
2026-01-03 00:49:21 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:49:21 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:49:24 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:49:24 | INFO  | Task a268fd38-997b-47df-bdd6-135954f5eb54 is in state STARTED
2026-01-03 00:49:24 | INFO  | Task 908639fc-48ff-4998-89b1-0e0cf565af15 is in state SUCCESS
2026-01-03 00:49:24 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:49:24 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:49:24 | INFO  | Task 631a732b-0385-4e64-9b68-47a260585a95 is in state STARTED
2026-01-03 00:49:24 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:49:24 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:49:27 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:49:27 | INFO  | Task a268fd38-997b-47df-bdd6-135954f5eb54 is in state STARTED
2026-01-03 00:49:27 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:49:27 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:49:27 | INFO  | Task 631a732b-0385-4e64-9b68-47a260585a95 is in state STARTED
2026-01-03 00:49:27 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:49:27 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:49:30 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:49:30 | INFO  | Task a268fd38-997b-47df-bdd6-135954f5eb54 is in state STARTED
2026-01-03 00:49:30 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:49:30 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:49:30 | INFO  | Task 631a732b-0385-4e64-9b68-47a260585a95 is in state STARTED
2026-01-03 00:49:30 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:49:30 | INFO  | Wait 1 second(s) until the next check


PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Saturday 03 January 2026 00:49:07 +0000 (0:00:00.566) 0:00:00.566 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Saturday 03 January 2026 00:49:07 +0000 (0:00:00.688) 0:00:01.255 ******
ok: [testbed-node-0] => (item=enable_memcached_True)
ok: [testbed-node-1] => (item=enable_memcached_True)
ok: [testbed-node-2] => (item=enable_memcached_True)

PLAY [Apply role memcached] ****************************************************

TASK [memcached : include_tasks] ***********************************************
Saturday 03 January 2026 00:49:08 +0000 (0:00:00.834) 0:00:02.089 ******
included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [memcached : Ensuring config directories exist] ***************************
Saturday 03 January 2026 00:49:09 +0000 (0:00:00.996) 0:00:03.086 ******
changed: [testbed-node-0] => (item=memcached)
changed: [testbed-node-1] => (item=memcached)
changed: [testbed-node-2] => (item=memcached)

TASK [memcached : Copying over config.json files for services] *****************
Saturday 03 January 2026 00:49:10 +0000 (0:00:00.860) 0:00:03.946 ******
changed: [testbed-node-2] => (item=memcached)
changed: [testbed-node-1] => (item=memcached)
changed: [testbed-node-0] => (item=memcached)

TASK [memcached : Check memcached container] ***********************************
Saturday 03 January 2026 00:49:12 +0000 (0:00:02.214) 0:00:06.161 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [memcached : Restart memcached container] **********************
Saturday 03 January 2026 00:49:14 +0000 (0:00:02.035) 0:00:08.196 ******
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

PLAY RECAP *********************************************************************
testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0


TASKS RECAP ********************************************************************
Saturday 03 January 2026 00:49:21 +0000 (0:00:06.531) 0:00:14.728 ******
===============================================================================
memcached : Restart memcached container --------------------------------- 6.53s
memcached : Copying over config.json files for services ----------------- 2.21s
memcached : Check memcached container ----------------------------------- 2.04s
memcached : include_tasks ----------------------------------------------- 1.00s
memcached : Ensuring config directories exist --------------------------- 0.86s
Group hosts based on enabled services ----------------------------------- 0.83s
Group hosts based on Kolla action --------------------------------------- 0.69s


PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Saturday 03 January 2026 00:49:08 +0000 (0:00:00.318) 0:00:00.318 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Saturday 03 January 2026 00:49:09 +0000 (0:00:00.762) 0:00:01.081 ******
ok: [testbed-node-0] => (item=enable_redis_True)
ok: [testbed-node-1] => (item=enable_redis_True)
ok: [testbed-node-2] => (item=enable_redis_True)

PLAY [Apply role redis] ********************************************************

TASK [redis : include_tasks] ***************************************************
Saturday 03 January 2026 00:49:10 +0000 (0:00:00.970) 0:00:02.051 ******
included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [redis : Ensuring config directories exist] *******************************
Saturday 03 January 2026 00:49:10 +0000 (0:00:00.470) 0:00:02.521 ******
changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})

TASK [redis : Copying over default config.json files] **************************
Saturday 03 January 2026 00:49:12 +0000 (0:00:01.433) 0:00:03.955 ******
changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})

TASK [redis : Copying over redis config files] *********************************
Saturday 03 January 2026 00:49:15 +0000 (0:00:03.222) 0:00:07.177 ******
changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})

TASK [redis : Check redis containers] ******************************************
Saturday 03 January 2026 00:49:18 +0000 (0:00:02.991) 0:00:10.169 ******
changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group':
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:49:33.938727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-03 00:49:33.938735 | orchestrator | 2026-01-03 00:49:33.938742 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-03 00:49:33.938750 | orchestrator | Saturday 03 January 2026 00:49:20 +0000 (0:00:01.877) 0:00:12.047 ****** 2026-01-03 00:49:33.938758 | orchestrator | 2026-01-03 00:49:33.938765 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-03 00:49:33.938776 | orchestrator | Saturday 03 January 2026 00:49:20 +0000 (0:00:00.181) 0:00:12.228 ****** 2026-01-03 00:49:33.938784 | orchestrator | 2026-01-03 00:49:33.938791 | orchestrator | TASK [redis : Flush handlers] 
**************************************************
2026-01-03 00:49:33.938799 | orchestrator | Saturday 03 January 2026 00:49:20 +0000 (0:00:00.109) 0:00:12.338 ******
2026-01-03 00:49:33.938806 | orchestrator |
2026-01-03 00:49:33.938814 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-01-03 00:49:33.938822 | orchestrator | Saturday 03 January 2026 00:49:20 +0000 (0:00:00.074) 0:00:12.412 ******
2026-01-03 00:49:33.938829 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:33.938837 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:33.938844 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:33.938852 | orchestrator |
2026-01-03 00:49:33.938859 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-01-03 00:49:33.938867 | orchestrator | Saturday 03 January 2026 00:49:28 +0000 (0:00:08.250) 0:00:20.662 ******
2026-01-03 00:49:33.938875 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:49:33.938882 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:49:33.938890 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:49:33.938897 | orchestrator |
2026-01-03 00:49:33.938905 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:49:33.938913 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:49:33.938922 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:49:33.938934 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:49:33.938942 | orchestrator |
2026-01-03 00:49:33.938950 | orchestrator |
2026-01-03 00:49:33.938958 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:49:33.938970 | orchestrator | Saturday 03 January 2026 00:49:32 +0000 (0:00:04.100) 0:00:24.763 ******
2026-01-03 00:49:33.938977 | orchestrator | ===============================================================================
2026-01-03 00:49:33.938984 | orchestrator | redis : Restart redis container ----------------------------------------- 8.25s
2026-01-03 00:49:33.938992 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.10s
2026-01-03 00:49:33.939000 | orchestrator | redis : Copying over default config.json files -------------------------- 3.22s
2026-01-03 00:49:33.939007 | orchestrator | redis : Copying over redis config files --------------------------------- 2.99s
2026-01-03 00:49:33.939014 | orchestrator | redis : Check redis containers ------------------------------------------ 1.88s
2026-01-03 00:49:33.939022 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.43s
2026-01-03 00:49:33.939029 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s
2026-01-03 00:49:33.939037 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.76s
2026-01-03 00:49:33.939044 | orchestrator | redis : include_tasks --------------------------------------------------- 0.47s
2026-01-03 00:49:33.939052 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.37s
2026-01-03 00:49:33.939060 | orchestrator | 2026-01-03 00:49:33 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:49:33.939067 | orchestrator | 2026-01-03 00:49:33 | INFO  | Task a268fd38-997b-47df-bdd6-135954f5eb54 is in state SUCCESS
2026-01-03 00:49:33.939074 | orchestrator | 2026-01-03 00:49:33 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:49:33.939081 | orchestrator | 2026-01-03 00:49:33 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:49:33.943351 |
orchestrator | 2026-01-03 00:49:33 | INFO  | Task 631a732b-0385-4e64-9b68-47a260585a95 is in state STARTED
2026-01-03 00:49:33.944214 | orchestrator | 2026-01-03 00:49:33 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:49:33.944376 | orchestrator | 2026-01-03 00:49:33 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:50:16.693610 | orchestrator | 2026-01-03 00:50:16 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:50:16.694142 | orchestrator | 2026-01-03 00:50:16 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:50:16.695103 | orchestrator | 2026-01-03 00:50:16 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:50:16.696442 | orchestrator | 2026-01-03 00:50:16 | INFO  | Task 631a732b-0385-4e64-9b68-47a260585a95 is in state SUCCESS
2026-01-03 00:50:16.697754 | orchestrator |
2026-01-03 00:50:16.697783 | orchestrator |
2026-01-03 00:50:16.697790 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 00:50:16.697797 | orchestrator |
2026-01-03 00:50:16.697803 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 00:50:16.697809 | orchestrator | Saturday 03 January 2026 00:49:07 +0000 (0:00:00.433) 0:00:00.433 ******
2026-01-03 00:50:16.697815 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:50:16.697823 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:50:16.697829 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:50:16.697835 | orchestrator | ok:
[testbed-node-3]
2026-01-03 00:50:16.697841 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:50:16.697848 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:50:16.697854 | orchestrator |
2026-01-03 00:50:16.697861 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 00:50:16.697866 | orchestrator | Saturday 03 January 2026 00:49:08 +0000 (0:00:01.186) 0:00:01.620 ******
2026-01-03 00:50:16.697870 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-03 00:50:16.697874 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-03 00:50:16.697878 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-03 00:50:16.697882 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-03 00:50:16.697886 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-03 00:50:16.697889 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-03 00:50:16.697893 | orchestrator |
2026-01-03 00:50:16.697897 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-01-03 00:50:16.697901 | orchestrator |
2026-01-03 00:50:16.697905 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-01-03 00:50:16.697908 | orchestrator | Saturday 03 January 2026 00:49:09 +0000 (0:00:01.356) 0:00:02.976 ******
2026-01-03 00:50:16.697926 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:50:16.697931 | orchestrator |
2026-01-03 00:50:16.697935 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-03 00:50:16.697939 | orchestrator | Saturday 03 January 2026 00:49:11 +0000 (0:00:01.709) 0:00:04.686 ******
2026-01-03 00:50:16.697943 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-03 00:50:16.697947 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-03 00:50:16.697951 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-03 00:50:16.697955 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-03 00:50:16.697958 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-03 00:50:16.697962 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-03 00:50:16.697966 | orchestrator |
2026-01-03 00:50:16.697970 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-03 00:50:16.697974 | orchestrator | Saturday 03 January 2026 00:49:13 +0000 (0:00:01.550) 0:00:06.236 ******
2026-01-03 00:50:16.697977 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-03 00:50:16.697981 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-03 00:50:16.697985 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-03 00:50:16.697989 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-03 00:50:16.697992 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-03 00:50:16.697996 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-03 00:50:16.698000 | orchestrator |
2026-01-03 00:50:16.698004 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-03 00:50:16.698008 | orchestrator | Saturday 03 January 2026 00:49:14 +0000 (0:00:01.606) 0:00:07.843 ******
2026-01-03 00:50:16.698011 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-01-03 00:50:16.698041 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:16.698048 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-01-03 00:50:16.698054 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:16.698060 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-01-03 00:50:16.698067 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:16.698073 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-01-03 00:50:16.698079 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:16.698086 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-01-03 00:50:16.698101 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:16.698108 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-01-03 00:50:16.698115 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:16.698122 | orchestrator |
2026-01-03 00:50:16.698129 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-01-03 00:50:16.698133 | orchestrator | Saturday 03 January 2026 00:49:16 +0000 (0:00:01.706) 0:00:09.549 ******
2026-01-03 00:50:16.698137 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:16.698141 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:16.698144 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:16.698148 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:16.698152 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:16.698156 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:16.698160 | orchestrator |
2026-01-03 00:50:16.698164 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-01-03 00:50:16.698170 | orchestrator | Saturday 03 January 2026 00:49:17 +0000 (0:00:00.969) 0:00:10.518 ******
2026-01-03 00:50:16.698193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image':
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698268 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698275 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698284 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698293 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698301 | orchestrator | 2026-01-03 00:50:16.698305 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-03 00:50:16.698309 | orchestrator | Saturday 03 January 2026 00:49:19 +0000 (0:00:02.090) 0:00:12.609 ****** 2026-01-03 00:50:16.698358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698387 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698418 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698422 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698429 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698445 | orchestrator | 2026-01-03 00:50:16.698449 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 
2026-01-03 00:50:16.698454 | orchestrator | Saturday 03 January 2026 00:49:24 +0000 (0:00:05.376) 0:00:17.986 ****** 2026-01-03 00:50:16.698458 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:16.698463 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:50:16.698467 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:50:16.698472 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:50:16.698476 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:50:16.698481 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:50:16.698485 | orchestrator | 2026-01-03 00:50:16.698490 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-01-03 00:50:16.698494 | orchestrator | Saturday 03 January 2026 00:49:26 +0000 (0:00:01.603) 0:00:19.589 ****** 2026-01-03 00:50:16.698499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698515 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698526 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698551 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-03 00:50:16.698560 | orchestrator | 2026-01-03 00:50:16.698564 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-03 00:50:16.698568 | orchestrator | Saturday 03 January 2026 00:49:28 +0000 (0:00:02.164) 0:00:21.754 ****** 
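The service definitions repeated in the task output above each carry a kolla-style healthcheck mapping: string-valued `interval`, `retries`, `start_period`, and `timeout` (seconds), plus a `['CMD-SHELL', …]` test command. A minimal sketch — assuming nothing beyond what the log shows — of translating one of those mappings into Docker-CLI-style health flags:

```python
def healthcheck_flags(hc):
    """Translate a kolla-style healthcheck mapping (as seen in the log
    above) into `docker run`-style flags.  Numeric values are seconds,
    stored as strings; the test is a CMD-SHELL command list."""
    kind, cmd = hc["test"][0], " ".join(hc["test"][1:])
    assert kind == "CMD-SHELL", "only CMD-SHELL tests appear in this log"
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# The openvswitch_db healthcheck exactly as it appears in the task items:
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "ovsdb-client list-dbs"], "timeout": "30"}
print(healthcheck_flags(hc))
```

This is an illustrative reading of the data structure only; kolla-ansible's own container module consumes the mapping directly rather than via CLI flags.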
2026-01-03 00:50:16.698571 | orchestrator | 2026-01-03 00:50:16.698575 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-03 00:50:16.698579 | orchestrator | Saturday 03 January 2026 00:49:29 +0000 (0:00:00.457) 0:00:22.211 ****** 2026-01-03 00:50:16.698583 | orchestrator | 2026-01-03 00:50:16.698587 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-03 00:50:16.698590 | orchestrator | Saturday 03 January 2026 00:49:29 +0000 (0:00:00.213) 0:00:22.424 ****** 2026-01-03 00:50:16.698594 | orchestrator | 2026-01-03 00:50:16.698598 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-03 00:50:16.698602 | orchestrator | Saturday 03 January 2026 00:49:29 +0000 (0:00:00.523) 0:00:22.948 ****** 2026-01-03 00:50:16.698605 | orchestrator | 2026-01-03 00:50:16.698611 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-03 00:50:16.698620 | orchestrator | Saturday 03 January 2026 00:49:30 +0000 (0:00:00.306) 0:00:23.254 ****** 2026-01-03 00:50:16.698627 | orchestrator | 2026-01-03 00:50:16.698633 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-03 00:50:16.698639 | orchestrator | Saturday 03 January 2026 00:49:30 +0000 (0:00:00.553) 0:00:23.808 ****** 2026-01-03 00:50:16.698653 | orchestrator | 2026-01-03 00:50:16.698660 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-03 00:50:16.698667 | orchestrator | Saturday 03 January 2026 00:49:30 +0000 (0:00:00.250) 0:00:24.058 ****** 2026-01-03 00:50:16.698674 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:50:16.698680 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:50:16.698687 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:50:16.698690 | orchestrator | changed: [testbed-node-2] 
2026-01-03 00:50:16.698694 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:50:16.698698 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:50:16.698702 | orchestrator | 2026-01-03 00:50:16.698706 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-03 00:50:16.698710 | orchestrator | Saturday 03 January 2026 00:49:40 +0000 (0:00:09.927) 0:00:33.985 ****** 2026-01-03 00:50:16.698714 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:50:16.698718 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:50:16.698722 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:50:16.698728 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:50:16.698734 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:50:16.698741 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:50:16.698747 | orchestrator | 2026-01-03 00:50:16.698753 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-03 00:50:16.698759 | orchestrator | Saturday 03 January 2026 00:49:42 +0000 (0:00:01.753) 0:00:35.738 ****** 2026-01-03 00:50:16.698764 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:50:16.698775 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:50:16.698784 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:50:16.698791 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:50:16.698797 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:50:16.698803 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:50:16.698809 | orchestrator | 2026-01-03 00:50:16.698813 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-03 00:50:16.698817 | orchestrator | Saturday 03 January 2026 00:49:52 +0000 (0:00:09.526) 0:00:45.264 ****** 2026-01-03 00:50:16.698821 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-01-03 00:50:16.698825 | 
orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-03 00:50:16.698829 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-03 00:50:16.698833 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-03 00:50:16.698837 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-03 00:50:16.698844 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-03 00:50:16.698848 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-01-03 00:50:16.698852 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-03 00:50:16.698855 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-03 00:50:16.698859 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-03 00:50:16.698863 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-03 00:50:16.698867 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-03 00:50:16.698870 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-03 00:50:16.698878 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-03 00:50:16.698882 | orchestrator | ok: 
[testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-03 00:50:16.698886 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-03 00:50:16.698889 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-03 00:50:16.698893 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-03 00:50:16.698897 | orchestrator | 2026-01-03 00:50:16.698901 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-03 00:50:16.698905 | orchestrator | Saturday 03 January 2026 00:49:58 +0000 (0:00:06.713) 0:00:51.978 ****** 2026-01-03 00:50:16.698908 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-03 00:50:16.698912 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:50:16.698916 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-03 00:50:16.698920 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:50:16.698924 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-03 00:50:16.698928 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:50:16.698932 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-03 00:50:16.698936 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-03 00:50:16.698939 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-03 00:50:16.698943 | orchestrator | 2026-01-03 00:50:16.698947 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-03 00:50:16.698951 | orchestrator | Saturday 03 January 2026 00:50:01 +0000 (0:00:02.406) 0:00:54.385 ****** 2026-01-03 00:50:16.698955 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-03 
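The "Set system-id, hostname and hw-offload" items above are (col, name, value) triples against the Open_vSwitch table, with `'state': 'absent'` meaning the key is removed. A hypothetical sketch of the `ovs-vsctl` invocations such items correspond to — not kolla's actual module code:

```python
def ovs_vsctl_args(item, table="Open_vSwitch", record="."):
    """Build the argv for one (col, name, value) item from the task
    above.  'state': 'absent' removes the key; otherwise it is set.
    Uses the standard `ovs-vsctl set`/`remove` command forms."""
    col, name = item["col"], item["name"]
    if item.get("state") == "absent":
        return ["ovs-vsctl", "remove", table, record, col, name]
    return ["ovs-vsctl", "set", table, record, f"{col}:{name}={item['value']}"]

# The first item logged for testbed-node-0:
print(ovs_vsctl_args({"col": "external_ids", "name": "system-id",
                      "value": "testbed-node-0"}))
```

The `ok:` (rather than `changed:`) results for the hw-offload items are consistent with the removal form: the key was already absent, so nothing changed.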
00:50:16.698958 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:50:16.698962 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-03 00:50:16.698966 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:50:16.698970 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-03 00:50:16.698974 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:50:16.698977 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-03 00:50:16.698981 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-03 00:50:16.698985 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-03 00:50:16.698989 | orchestrator | 2026-01-03 00:50:16.698992 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-03 00:50:16.698996 | orchestrator | Saturday 03 January 2026 00:50:05 +0000 (0:00:03.974) 0:00:58.359 ****** 2026-01-03 00:50:16.699000 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:50:16.699004 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:50:16.699008 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:50:16.699012 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:50:16.699015 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:50:16.699021 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:50:16.699025 | orchestrator | 2026-01-03 00:50:16.699029 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:50:16.699033 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-03 00:50:16.699037 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-03 00:50:16.699041 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-03 
00:50:16.699048 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-03 00:50:16.699052 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-03 00:50:16.699059 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-03 00:50:16.699064 | orchestrator |
2026-01-03 00:50:16.699068 | orchestrator |
2026-01-03 00:50:16.699071 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:50:16.699075 | orchestrator | Saturday 03 January 2026 00:50:13 +0000 (0:00:08.575) 0:01:06.935 ******
2026-01-03 00:50:16.699079 | orchestrator | ===============================================================================
2026-01-03 00:50:16.699083 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.10s
2026-01-03 00:50:16.699087 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.93s
2026-01-03 00:50:16.699091 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.71s
2026-01-03 00:50:16.699094 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.38s
2026-01-03 00:50:16.699098 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.97s
2026-01-03 00:50:16.699102 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.41s
2026-01-03 00:50:16.699106 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.30s
2026-01-03 00:50:16.699109 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.16s
2026-01-03 00:50:16.699113 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.09s
2026-01-03 00:50:16.699117 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.75s
2026-01-03 00:50:16.699121 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.71s
2026-01-03 00:50:16.699124 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.71s
2026-01-03 00:50:16.699128 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.61s
2026-01-03 00:50:16.699132 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.60s
2026-01-03 00:50:16.699136 | orchestrator | module-load : Load modules ---------------------------------------------- 1.55s
2026-01-03 00:50:16.699140 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.36s
2026-01-03 00:50:16.699143 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.19s
2026-01-03 00:50:16.699147 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.97s
2026-01-03 00:50:16.699151 | orchestrator | 2026-01-03 00:50:16 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED
2026-01-03 00:50:16.701108 | orchestrator | 2026-01-03 00:50:16 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:50:16.701147 | orchestrator | 2026-01-03 00:50:16 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:50:19.742054 | orchestrator | 2026-01-03 00:50:19 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:50:19.745272 | orchestrator | 2026-01-03 00:50:19 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:50:19.745386 | orchestrator | 2026-01-03 00:50:19 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:50:19.745392 | orchestrator | 2026-01-03 00:50:19 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED
2026-01-03 00:50:19.749376 | orchestrator | 2026-01-03 00:50:19 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:50:19.749471 | orchestrator | 2026-01-03 00:50:19 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:50:22.783057 | orchestrator | 2026-01-03 00:50:22 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:50:22.785584 | orchestrator | 2026-01-03 00:50:22 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:50:22.786269 | orchestrator | 2026-01-03 00:50:22 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:50:22.786998 | orchestrator | 2026-01-03 00:50:22 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED
2026-01-03 00:50:22.788662 | orchestrator | 2026-01-03 00:50:22 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:50:22.788712 | orchestrator | 2026-01-03 00:50:22 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:50:25.813574 | orchestrator | 2026-01-03 00:50:25 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:50:25.814679 | orchestrator | 2026-01-03 00:50:25 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:50:25.815551 | orchestrator | 2026-01-03 00:50:25 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:50:25.817632 | orchestrator | 2026-01-03 00:50:25 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED
2026-01-03 00:50:25.819529 | orchestrator | 2026-01-03 00:50:25 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:50:25.819567 | orchestrator | 2026-01-03 00:50:25 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:50:28.883750 | orchestrator | 2026-01-03 00:50:28 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:50:28.884057 | orchestrator | 2026-01-03 00:50:28 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:50:28.885824 | orchestrator | 2026-01-03 00:50:28 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:50:28.886550 | orchestrator | 2026-01-03 00:50:28 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED
2026-01-03 00:50:28.887555 | orchestrator | 2026-01-03 00:50:28 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:50:28.888018 | orchestrator | 2026-01-03 00:50:28 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:50:32.044866 | orchestrator | 2026-01-03 00:50:32 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:50:32.049994 | orchestrator | 2026-01-03 00:50:32 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:50:32.054249 | orchestrator | 2026-01-03 00:50:32 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:50:32.058123 | orchestrator | 2026-01-03 00:50:32 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED
2026-01-03 00:50:32.058204 | orchestrator | 2026-01-03 00:50:32 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:50:32.058303 | orchestrator | 2026-01-03 00:50:32 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:50:35.114174 | orchestrator | 2026-01-03 00:50:35 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:50:35.117714 | orchestrator | 2026-01-03 00:50:35 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:50:35.122926 | orchestrator | 2026-01-03 00:50:35 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:50:35.125995 | orchestrator | 2026-01-03 00:50:35 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED
2026-01-03 00:50:35.146286 | orchestrator | 2026-01-03 00:50:35 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:50:35.146342 | orchestrator | 2026-01-03 00:50:35 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:50:38.172806 | orchestrator | 2026-01-03 00:50:38 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:50:38.172855 | orchestrator | 2026-01-03 00:50:38 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:50:38.172861 | orchestrator | 2026-01-03 00:50:38 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:50:38.172865 | orchestrator | 2026-01-03 00:50:38 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED
2026-01-03 00:50:38.173068 | orchestrator | 2026-01-03 00:50:38 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:50:38.173084 | orchestrator | 2026-01-03 00:50:38 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:50:41.254137 | orchestrator | 2026-01-03 00:50:41 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:50:41.254221 | orchestrator | 2026-01-03 00:50:41 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:50:41.254687 | orchestrator | 2026-01-03 00:50:41 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:50:41.255445 | orchestrator | 2026-01-03 00:50:41 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED
2026-01-03 00:50:41.256807 | orchestrator | 2026-01-03 00:50:41 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:50:41.256854 | orchestrator | 2026-01-03 00:50:41 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:50:44.347591 | orchestrator | 2026-01-03 00:50:44 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:50:44.347665 | orchestrator | 2026-01-03 00:50:44 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:50:44.347671 | orchestrator | 2026-01-03 00:50:44 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:50:44.347675 | orchestrator | 2026-01-03 00:50:44 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED
2026-01-03 00:50:44.347680 | orchestrator | 2026-01-03 00:50:44 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:50:44.347685 | orchestrator | 2026-01-03 00:50:44 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:50:47.364415 | orchestrator | 2026-01-03 00:50:47 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:50:47.365634 | orchestrator | 2026-01-03 00:50:47 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:50:47.379333 | orchestrator | 2026-01-03 00:50:47 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:50:47.382183 | orchestrator | 2026-01-03 00:50:47 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED
2026-01-03 00:50:47.382663 | orchestrator | 2026-01-03 00:50:47 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state STARTED
2026-01-03 00:50:47.382684 | orchestrator | 2026-01-03 00:50:47 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:50:50.435506 | orchestrator | 2026-01-03 00:50:50 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:50:50.435694 | orchestrator | 2026-01-03 00:50:50 | INFO  | Task 86a947d5-d611-4d34-b2d6-c7c9b1c01342 is in state STARTED
2026-01-03 00:50:50.436696 | orchestrator | 2026-01-03 00:50:50 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:50:50.437482 | orchestrator | 2026-01-03 00:50:50 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED
2026-01-03 00:50:50.438098 | orchestrator | 2026-01-03 00:50:50 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED
2026-01-03 00:50:50.439020 | orchestrator | 2026-01-03 00:50:50 | INFO  | Task 42b6a883-1978-457f-83d4-9712423b5161 is in state SUCCESS
2026-01-03 00:50:50.440720 | orchestrator |
2026-01-03 00:50:50.442156 | orchestrator |
2026-01-03 00:50:50.442197 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-01-03 00:50:50.442203 | orchestrator |
2026-01-03 00:50:50.442208 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-01-03 00:50:50.442213 | orchestrator | Saturday 03 January 2026 00:46:21 +0000 (0:00:00.172) 0:00:00.172 ******
2026-01-03 00:50:50.442218 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:50:50.442223 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:50:50.442227 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:50:50.442232 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:50:50.442235 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:50:50.442239 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:50:50.442243 | orchestrator |
2026-01-03 00:50:50.442273 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-01-03 00:50:50.442278 | orchestrator | Saturday 03 January 2026 00:46:21 +0000 (0:00:00.650) 0:00:00.823 ******
2026-01-03 00:50:50.442282 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:50.442287 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:50.442291 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:50.442295 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.442299 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.442303 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.442307 | orchestrator |
2026-01-03 00:50:50.442312 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-01-03 00:50:50.442316 | orchestrator | Saturday 03 January 2026 00:46:22 +0000 (0:00:00.599) 0:00:01.422 ******
2026-01-03 00:50:50.442322 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:50.442328 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:50.442334 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:50.442341 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.442347 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.442353 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.442359 | orchestrator |
2026-01-03 00:50:50.442365 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-01-03 00:50:50.442371 | orchestrator | Saturday 03 January 2026 00:46:22 +0000 (0:00:00.640) 0:00:02.063 ******
2026-01-03 00:50:50.442378 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:50:50.442383 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:50:50.442388 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:50:50.442414 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:50:50.442421 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:50:50.442428 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:50:50.442433 | orchestrator |
2026-01-03 00:50:50.442440 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-01-03 00:50:50.442446 | orchestrator | Saturday 03 January 2026 00:46:25 +0000 (0:00:02.034) 0:00:04.097 ******
2026-01-03 00:50:50.442452 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:50:50.442459 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:50:50.442465 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:50:50.442471 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:50:50.442519 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:50:50.442523 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:50:50.442543 | orchestrator |
2026-01-03 00:50:50.442547 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-01-03 00:50:50.442551 | orchestrator | Saturday 03 January 2026 00:46:26 +0000 (0:00:01.733) 0:00:05.830 ******
2026-01-03 00:50:50.442555 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:50:50.442559 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:50:50.442562 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:50:50.442566 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:50:50.442570 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:50:50.442574 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:50:50.442577 | orchestrator |
2026-01-03 00:50:50.442581 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-01-03 00:50:50.442585 | orchestrator | Saturday 03 January 2026 00:46:27 +0000 (0:00:00.966) 0:00:06.797 ******
2026-01-03 00:50:50.442589 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:50.442592 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:50.442596 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:50.442600 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.442604 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.442608 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.442611 | orchestrator |
2026-01-03 00:50:50.442615 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-01-03 00:50:50.442619 | orchestrator | Saturday 03 January 2026 00:46:28 +0000 (0:00:00.676) 0:00:07.474 ******
2026-01-03 00:50:50.442623 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:50.442627 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:50.442631 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:50.442635 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.442641 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.442648 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.442654 | orchestrator |
2026-01-03 00:50:50.442661 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-01-03 00:50:50.442668 | orchestrator | Saturday 03 January 2026 00:46:28 +0000 (0:00:00.515) 0:00:07.989 ******
2026-01-03 00:50:50.442675 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-03 00:50:50.442682 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-03 00:50:50.442688 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:50.442695 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-03 00:50:50.442702 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-03 00:50:50.442706 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:50.442710 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-03 00:50:50.442714 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-03 00:50:50.442718 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:50.442722 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-03 00:50:50.442737 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-03 00:50:50.442741 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.442746 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-03 00:50:50.442750 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-03 00:50:50.442754 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.442757 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-03 00:50:50.442761 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-03 00:50:50.442765 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.442769 | orchestrator |
2026-01-03 00:50:50.442773 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-01-03 00:50:50.442782 | orchestrator | Saturday 03 January 2026 00:46:29 +0000 (0:00:00.638) 0:00:08.628 ******
2026-01-03 00:50:50.442786 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:50.442789 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:50.442793 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:50.442797 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.442801 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.442805 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.442808 | orchestrator |
2026-01-03 00:50:50.442812 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-01-03 00:50:50.442817 | orchestrator | Saturday 03 January 2026 00:46:31 +0000 (0:00:01.621) 0:00:10.249 ******
2026-01-03 00:50:50.442821 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:50:50.442826 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:50:50.442830 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:50:50.442833 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:50:50.442837 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:50:50.442841 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:50:50.442845 | orchestrator |
2026-01-03 00:50:50.442849 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-01-03 00:50:50.442853 | orchestrator | Saturday 03 January 2026 00:46:32 +0000 (0:00:00.934) 0:00:11.183 ******
2026-01-03 00:50:50.442856 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:50:50.442860 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:50:50.442864 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:50:50.442873 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:50:50.442877 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:50:50.442880 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:50:50.442884 | orchestrator |
2026-01-03 00:50:50.442888 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-01-03 00:50:50.442892 | orchestrator | Saturday 03 January 2026 00:46:36 +0000 (0:00:04.852) 0:00:16.036 ******
2026-01-03 00:50:50.442896 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:50.442900 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:50.442903 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:50.442907 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.442911 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.442915 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.442919 | orchestrator |
2026-01-03 00:50:50.442923 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-01-03 00:50:50.442926 | orchestrator | Saturday 03 January 2026 00:46:37 +0000 (0:00:00.966) 0:00:17.002 ******
2026-01-03 00:50:50.442930 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:50.442934 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:50.442938 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:50.442942 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.442945 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.442949 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.442953 | orchestrator |
2026-01-03 00:50:50.442957 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-01-03 00:50:50.442962 | orchestrator | Saturday 03 January 2026 00:46:39 +0000 (0:00:01.778) 0:00:18.780 ******
2026-01-03 00:50:50.442966 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:50.442970 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:50.442974 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:50.442977 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.442981 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.442995 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.442999 | orchestrator |
2026-01-03 00:50:50.443008 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-01-03 00:50:50.443012 | orchestrator | Saturday 03 January 2026 00:46:40 +0000 (0:00:00.523) 0:00:19.303 ******
2026-01-03 00:50:50.443016 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-01-03 00:50:50.443024 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-01-03 00:50:50.443028 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:50.443032 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-01-03 00:50:50.443035 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-01-03 00:50:50.443039 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:50.443043 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-01-03 00:50:50.443047 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-01-03 00:50:50.443051 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:50.443054 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-01-03 00:50:50.443058 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-01-03 00:50:50.443062 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.443066 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-01-03 00:50:50.443070 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-01-03 00:50:50.443073 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.443077 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-01-03 00:50:50.443081 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-01-03 00:50:50.443085 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.443088 | orchestrator |
2026-01-03 00:50:50.443092 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-01-03 00:50:50.443100 | orchestrator | Saturday 03 January 2026 00:46:41 +0000 (0:00:01.543) 0:00:20.847 ******
2026-01-03 00:50:50.443104 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:50.443117 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:50.443121 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:50.443124 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.443128 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.443132 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.443136 | orchestrator |
2026-01-03 00:50:50.443139 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-01-03 00:50:50.443143 | orchestrator | Saturday 03 January 2026 00:46:43 +0000 (0:00:01.323) 0:00:22.171 ******
2026-01-03 00:50:50.443147 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:50:50.443151 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:50:50.443155 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:50:50.443158 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.443162 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.443166 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.443170 | orchestrator |
2026-01-03 00:50:50.443173 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-01-03 00:50:50.443177 | orchestrator |
2026-01-03 00:50:50.443181 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-01-03 00:50:50.443185 | orchestrator | Saturday 03 January 2026 00:46:44 +0000 (0:00:01.271) 0:00:23.445 ******
2026-01-03 00:50:50.443188 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:50:50.443192 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:50:50.443196 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:50:50.443200 | orchestrator |
2026-01-03 00:50:50.443204 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-01-03 00:50:50.443207 | orchestrator | Saturday 03 January 2026 00:46:46 +0000 (0:00:02.448) 0:00:25.893 ******
2026-01-03 00:50:50.443211 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:50:50.443215 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:50:50.443219 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:50:50.443223 | orchestrator |
2026-01-03 00:50:50.443226 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-01-03 00:50:50.443230 | orchestrator | Saturday 03 January 2026 00:46:48 +0000 (0:00:01.398) 0:00:27.292 ******
2026-01-03 00:50:50.443234 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:50:50.443241 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:50:50.443266 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:50:50.443271 | orchestrator |
2026-01-03 00:50:50.443275 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-01-03 00:50:50.443279 | orchestrator | Saturday 03 January 2026 00:46:49 +0000 (0:00:01.057) 0:00:28.350 ******
2026-01-03 00:50:50.443283 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:50:50.443286 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:50:50.443290 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:50:50.443294 | orchestrator |
2026-01-03 00:50:50.443298 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-01-03 00:50:50.443302 | orchestrator | Saturday 03 January 2026 00:46:50 +0000 (0:00:00.984) 0:00:29.335 ******
2026-01-03 00:50:50.443305 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.443309 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.443313 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.443317 | orchestrator |
2026-01-03 00:50:50.443320 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-01-03 00:50:50.443324 | orchestrator | Saturday 03 January 2026 00:46:50 +0000 (0:00:00.377) 0:00:29.713 ******
2026-01-03 00:50:50.443328 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:50:50.443332 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:50:50.443335 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:50:50.443339 | orchestrator |
2026-01-03 00:50:50.443343 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-01-03 00:50:50.443347 | orchestrator | Saturday 03 January 2026 00:46:51 +0000 (0:00:01.230) 0:00:30.943 ******
2026-01-03 00:50:50.443352 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:50:50.443358 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:50:50.443364 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:50:50.443373 | orchestrator |
2026-01-03 00:50:50.443381 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-01-03 00:50:50.443389 | orchestrator | Saturday 03 January 2026 00:46:53 +0000 (0:00:01.327) 0:00:32.271 ******
2026-01-03 00:50:50.443395 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:50:50.443400 | orchestrator |
2026-01-03 00:50:50.443406 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-01-03 00:50:50.443415 | orchestrator | Saturday 03 January 2026 00:46:53 +0000 (0:00:00.584) 0:00:32.855 ******
2026-01-03 00:50:50.443420 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:50:50.443425 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:50:50.443431 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:50:50.443438 | orchestrator |
2026-01-03 00:50:50.443444 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-01-03 00:50:50.443450 | orchestrator | Saturday 03 January 2026 00:46:55 +0000 (0:00:02.005) 0:00:34.860 ******
2026-01-03 00:50:50.443456 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.443461 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.443467 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:50:50.443473 | orchestrator |
2026-01-03 00:50:50.443478 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-01-03 00:50:50.443484 | orchestrator | Saturday 03 January 2026 00:46:56 +0000 (0:00:00.627) 0:00:35.488 ******
2026-01-03 00:50:50.443491 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.443496 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.443501 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:50:50.443507 | orchestrator |
2026-01-03 00:50:50.443513 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-01-03 00:50:50.443520 | orchestrator | Saturday 03 January 2026 00:46:57 +0000 (0:00:01.063) 0:00:36.552 ******
2026-01-03 00:50:50.443569 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.443575 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.443581 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:50:50.443587 | orchestrator |
2026-01-03 00:50:50.443601 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-01-03 00:50:50.443612 | orchestrator | Saturday 03 January 2026 00:46:58 +0000 (0:00:01.319) 0:00:37.871 ******
2026-01-03 00:50:50.443619 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.443625 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.443631 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.443638 | orchestrator |
2026-01-03 00:50:50.443643 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-01-03 00:50:50.443647 | orchestrator | Saturday 03 January 2026 00:46:59 +0000 (0:00:00.559) 0:00:38.430 ******
2026-01-03 00:50:50.443651 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:50:50.443655 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:50:50.443658 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:50:50.443662 | orchestrator |
2026-01-03 00:50:50.443666 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-01-03 00:50:50.443670 | orchestrator | Saturday 03 January 2026 00:46:59 +0000 (0:00:00.388) 0:00:38.818 ******
2026-01-03 00:50:50.443674 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:50:50.443678 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:50:50.443682 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:50:50.443685 | orchestrator |
2026-01-03 00:50:50.443689 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-01-03 00:50:50.443693 | orchestrator | Saturday 03 January 2026 00:47:01 +0000 (0:00:01.556) 0:00:40.375 ******
2026-01-03 00:50:50.443697 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:50:50.443701 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:50:50.443705 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:50:50.443709 | orchestrator |
2026-01-03 00:50:50.443712 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-01-03 00:50:50.443716 | orchestrator | Saturday 03 January 2026 00:47:03 +0000 (0:00:02.613) 0:00:42.988 ******
2026-01-03 00:50:50.443720 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:50:50.443724 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:50:50.443728 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:50:50.443731 | orchestrator |
2026-01-03 00:50:50.443735 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-01-03 00:50:50.443739 | orchestrator | Saturday 03 January 2026 00:47:04 +0000 (0:00:01.020) 0:00:44.009 ******
2026-01-03 00:50:50.443747 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-03 00:50:50.443752 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-03 00:50:50.443756 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-03 00:50:50.443760 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-03 00:50:50.443764 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-03 00:50:50.443768 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-03 00:50:50.443772 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-03 00:50:50.443775 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-03 00:50:50.443779 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-03 00:50:50.443783 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-03 00:50:50.443791 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-03 00:50:50.443795 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-03 00:50:50.443805 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-01-03 00:50:50.443809 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-01-03 00:50:50.443813 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-01-03 00:50:50.443817 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:50:50.443821 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:50:50.443825 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:50:50.443829 | orchestrator | 2026-01-03 00:50:50.443833 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-01-03 00:50:50.443837 | orchestrator | Saturday 03 January 2026 00:47:58 +0000 (0:00:54.036) 0:01:38.048 ****** 2026-01-03 00:50:50.443841 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:50.443845 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:50:50.443849 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:50:50.443852 | orchestrator | 2026-01-03 00:50:50.443856 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-01-03 00:50:50.443864 | orchestrator | Saturday 03 January 2026 00:47:59 +0000 (0:00:00.472) 0:01:38.521 ****** 2026-01-03 00:50:50.443868 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:50:50.443872 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:50:50.443876 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:50:50.443880 | orchestrator | 2026-01-03 00:50:50.443884 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-01-03 00:50:50.443887 | orchestrator | Saturday 03 January 2026 00:48:00 +0000 (0:00:01.391) 0:01:39.912 ****** 2026-01-03 00:50:50.443892 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:50:50.443899 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:50:50.443905 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:50:50.443911 | orchestrator | 2026-01-03 00:50:50.443917 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-01-03 00:50:50.443923 | orchestrator | Saturday 03 January 2026 00:48:02 +0000 (0:00:01.826) 0:01:41.739 ****** 2026-01-03 00:50:50.443930 
| orchestrator | changed: [testbed-node-0] 2026-01-03 00:50:50.443937 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:50:50.443943 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:50:50.443950 | orchestrator | 2026-01-03 00:50:50.443956 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-01-03 00:50:50.443963 | orchestrator | Saturday 03 January 2026 00:48:27 +0000 (0:00:25.336) 0:02:07.075 ****** 2026-01-03 00:50:50.443970 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:50:50.443976 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:50:50.443982 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:50:50.443989 | orchestrator | 2026-01-03 00:50:50.443995 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-01-03 00:50:50.444001 | orchestrator | Saturday 03 January 2026 00:48:28 +0000 (0:00:00.690) 0:02:07.765 ****** 2026-01-03 00:50:50.444007 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:50:50.444019 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:50:50.444026 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:50:50.444068 | orchestrator | 2026-01-03 00:50:50.444076 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-01-03 00:50:50.444083 | orchestrator | Saturday 03 January 2026 00:48:29 +0000 (0:00:00.685) 0:02:08.450 ****** 2026-01-03 00:50:50.444088 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:50:50.444101 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:50:50.444107 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:50:50.444116 | orchestrator | 2026-01-03 00:50:50.444125 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-01-03 00:50:50.444131 | orchestrator | Saturday 03 January 2026 00:48:30 +0000 (0:00:00.637) 0:02:09.088 ****** 2026-01-03 00:50:50.444136 | orchestrator | ok: [testbed-node-0] 
2026-01-03 00:50:50.444142 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:50:50.444147 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:50:50.444153 | orchestrator | 2026-01-03 00:50:50.444158 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-01-03 00:50:50.444163 | orchestrator | Saturday 03 January 2026 00:48:30 +0000 (0:00:00.956) 0:02:10.045 ****** 2026-01-03 00:50:50.444169 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:50:50.444174 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:50:50.444180 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:50:50.444185 | orchestrator | 2026-01-03 00:50:50.444190 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-01-03 00:50:50.444196 | orchestrator | Saturday 03 January 2026 00:48:31 +0000 (0:00:00.284) 0:02:10.329 ****** 2026-01-03 00:50:50.444201 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:50:50.444207 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:50:50.444212 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:50:50.444217 | orchestrator | 2026-01-03 00:50:50.444223 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-01-03 00:50:50.444228 | orchestrator | Saturday 03 January 2026 00:48:31 +0000 (0:00:00.668) 0:02:10.998 ****** 2026-01-03 00:50:50.444233 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:50:50.444239 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:50:50.444244 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:50:50.444297 | orchestrator | 2026-01-03 00:50:50.444305 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-01-03 00:50:50.444311 | orchestrator | Saturday 03 January 2026 00:48:32 +0000 (0:00:00.655) 0:02:11.653 ****** 2026-01-03 00:50:50.444317 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:50:50.444324 | 
orchestrator | changed: [testbed-node-1] 2026-01-03 00:50:50.444330 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:50:50.444338 | orchestrator | 2026-01-03 00:50:50.444344 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-01-03 00:50:50.444356 | orchestrator | Saturday 03 January 2026 00:48:33 +0000 (0:00:01.075) 0:02:12.729 ****** 2026-01-03 00:50:50.444360 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:50:50.444365 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:50:50.444369 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:50:50.444373 | orchestrator | 2026-01-03 00:50:50.444377 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-01-03 00:50:50.444381 | orchestrator | Saturday 03 January 2026 00:48:34 +0000 (0:00:00.749) 0:02:13.479 ****** 2026-01-03 00:50:50.444385 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:50.444388 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:50:50.444392 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:50:50.444396 | orchestrator | 2026-01-03 00:50:50.444400 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-01-03 00:50:50.444408 | orchestrator | Saturday 03 January 2026 00:48:34 +0000 (0:00:00.302) 0:02:13.781 ****** 2026-01-03 00:50:50.444414 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:50.444421 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:50:50.444431 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:50:50.444437 | orchestrator | 2026-01-03 00:50:50.444444 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-01-03 00:50:50.444451 | orchestrator | Saturday 03 January 2026 00:48:34 +0000 (0:00:00.294) 0:02:14.076 ****** 2026-01-03 00:50:50.444457 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:50:50.444463 | orchestrator | 
ok: [testbed-node-1] 2026-01-03 00:50:50.444476 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:50:50.444482 | orchestrator | 2026-01-03 00:50:50.444489 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-01-03 00:50:50.444497 | orchestrator | Saturday 03 January 2026 00:48:35 +0000 (0:00:00.948) 0:02:15.025 ****** 2026-01-03 00:50:50.444505 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:50:50.444520 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:50:50.444527 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:50:50.444532 | orchestrator | 2026-01-03 00:50:50.444536 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-01-03 00:50:50.444541 | orchestrator | Saturday 03 January 2026 00:48:36 +0000 (0:00:00.574) 0:02:15.600 ****** 2026-01-03 00:50:50.444545 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-03 00:50:50.444550 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-03 00:50:50.444554 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-03 00:50:50.444557 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-03 00:50:50.444561 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-03 00:50:50.444565 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-03 00:50:50.444569 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-03 00:50:50.444574 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-03 
00:50:50.444578 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-03 00:50:50.444582 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-01-03 00:50:50.444586 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-03 00:50:50.444590 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-03 00:50:50.444610 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-01-03 00:50:50.444615 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-03 00:50:50.444619 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-03 00:50:50.444629 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-03 00:50:50.444633 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-03 00:50:50.444637 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-03 00:50:50.444641 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-03 00:50:50.444645 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-03 00:50:50.444649 | orchestrator | 2026-01-03 00:50:50.444653 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-01-03 00:50:50.444657 | orchestrator | 2026-01-03 00:50:50.444662 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-01-03 00:50:50.444668 | orchestrator | Saturday 03 January 2026 00:48:39 +0000 (0:00:03.107) 
0:02:18.708 ****** 2026-01-03 00:50:50.444675 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:50:50.444681 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:50:50.444688 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:50:50.444731 | orchestrator | 2026-01-03 00:50:50.444742 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-01-03 00:50:50.444762 | orchestrator | Saturday 03 January 2026 00:48:40 +0000 (0:00:00.621) 0:02:19.330 ****** 2026-01-03 00:50:50.444769 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:50:50.444776 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:50:50.444783 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:50:50.444789 | orchestrator | 2026-01-03 00:50:50.444796 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-01-03 00:50:50.444811 | orchestrator | Saturday 03 January 2026 00:48:40 +0000 (0:00:00.630) 0:02:19.960 ****** 2026-01-03 00:50:50.444817 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:50:50.444821 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:50:50.444825 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:50:50.444829 | orchestrator | 2026-01-03 00:50:50.444834 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-01-03 00:50:50.444838 | orchestrator | Saturday 03 January 2026 00:48:41 +0000 (0:00:00.300) 0:02:20.261 ****** 2026-01-03 00:50:50.444842 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:50:50.444846 | orchestrator | 2026-01-03 00:50:50.444850 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-01-03 00:50:50.444856 | orchestrator | Saturday 03 January 2026 00:48:41 +0000 (0:00:00.730) 0:02:20.991 ****** 2026-01-03 00:50:50.444862 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:50:50.444870 | 
orchestrator | skipping: [testbed-node-4] 2026-01-03 00:50:50.444876 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:50:50.444882 | orchestrator | 2026-01-03 00:50:50.444888 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-01-03 00:50:50.444895 | orchestrator | Saturday 03 January 2026 00:48:42 +0000 (0:00:00.359) 0:02:21.350 ****** 2026-01-03 00:50:50.444900 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:50:50.444906 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:50:50.444912 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:50:50.444918 | orchestrator | 2026-01-03 00:50:50.444923 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-01-03 00:50:50.444938 | orchestrator | Saturday 03 January 2026 00:48:42 +0000 (0:00:00.357) 0:02:21.708 ****** 2026-01-03 00:50:50.444944 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:50:50.444950 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:50:50.444955 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:50:50.444961 | orchestrator | 2026-01-03 00:50:50.444966 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-01-03 00:50:50.444972 | orchestrator | Saturday 03 January 2026 00:48:42 +0000 (0:00:00.355) 0:02:22.063 ****** 2026-01-03 00:50:50.444978 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:50:50.444984 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:50:50.444990 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:50:50.444996 | orchestrator | 2026-01-03 00:50:50.445003 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-01-03 00:50:50.445009 | orchestrator | Saturday 03 January 2026 00:48:43 +0000 (0:00:00.751) 0:02:22.814 ****** 2026-01-03 00:50:50.445016 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:50:50.445022 | 
orchestrator | changed: [testbed-node-4] 2026-01-03 00:50:50.445028 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:50:50.445035 | orchestrator | 2026-01-03 00:50:50.445041 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-01-03 00:50:50.445048 | orchestrator | Saturday 03 January 2026 00:48:44 +0000 (0:00:00.965) 0:02:23.780 ****** 2026-01-03 00:50:50.445055 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:50:50.445061 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:50:50.445068 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:50:50.445074 | orchestrator | 2026-01-03 00:50:50.445081 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-01-03 00:50:50.445088 | orchestrator | Saturday 03 January 2026 00:48:45 +0000 (0:00:01.158) 0:02:24.939 ****** 2026-01-03 00:50:50.445094 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:50:50.445107 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:50:50.445112 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:50:50.445116 | orchestrator | 2026-01-03 00:50:50.445120 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-03 00:50:50.445124 | orchestrator | 2026-01-03 00:50:50.445128 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-03 00:50:50.445132 | orchestrator | Saturday 03 January 2026 00:48:56 +0000 (0:00:10.687) 0:02:35.626 ****** 2026-01-03 00:50:50.445136 | orchestrator | ok: [testbed-manager] 2026-01-03 00:50:50.445140 | orchestrator | 2026-01-03 00:50:50.445148 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-03 00:50:50.445153 | orchestrator | Saturday 03 January 2026 00:48:57 +0000 (0:00:00.822) 0:02:36.449 ****** 2026-01-03 00:50:50.445157 | orchestrator | changed: [testbed-manager] 2026-01-03 
00:50:50.445161 | orchestrator | 2026-01-03 00:50:50.445165 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-03 00:50:50.445169 | orchestrator | Saturday 03 January 2026 00:48:57 +0000 (0:00:00.428) 0:02:36.878 ****** 2026-01-03 00:50:50.445173 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-03 00:50:50.445177 | orchestrator | 2026-01-03 00:50:50.445181 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-03 00:50:50.445185 | orchestrator | Saturday 03 January 2026 00:48:58 +0000 (0:00:00.574) 0:02:37.452 ****** 2026-01-03 00:50:50.445189 | orchestrator | changed: [testbed-manager] 2026-01-03 00:50:50.445193 | orchestrator | 2026-01-03 00:50:50.445197 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-03 00:50:50.445201 | orchestrator | Saturday 03 January 2026 00:48:59 +0000 (0:00:00.906) 0:02:38.359 ****** 2026-01-03 00:50:50.445205 | orchestrator | changed: [testbed-manager] 2026-01-03 00:50:50.445209 | orchestrator | 2026-01-03 00:50:50.445213 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-03 00:50:50.445217 | orchestrator | Saturday 03 January 2026 00:48:59 +0000 (0:00:00.571) 0:02:38.931 ****** 2026-01-03 00:50:50.445220 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-03 00:50:50.445225 | orchestrator | 2026-01-03 00:50:50.445228 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-03 00:50:50.445232 | orchestrator | Saturday 03 January 2026 00:49:01 +0000 (0:00:01.546) 0:02:40.477 ****** 2026-01-03 00:50:50.445236 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-03 00:50:50.445240 | orchestrator | 2026-01-03 00:50:50.445244 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-01-03 00:50:50.445267 | orchestrator | Saturday 03 January 2026 00:49:02 +0000 (0:00:00.794) 0:02:41.271 ****** 2026-01-03 00:50:50.445272 | orchestrator | changed: [testbed-manager] 2026-01-03 00:50:50.445277 | orchestrator | 2026-01-03 00:50:50.445280 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-03 00:50:50.445285 | orchestrator | Saturday 03 January 2026 00:49:02 +0000 (0:00:00.532) 0:02:41.804 ****** 2026-01-03 00:50:50.445289 | orchestrator | changed: [testbed-manager] 2026-01-03 00:50:50.445293 | orchestrator | 2026-01-03 00:50:50.445297 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-01-03 00:50:50.445301 | orchestrator | 2026-01-03 00:50:50.445305 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-01-03 00:50:50.445309 | orchestrator | Saturday 03 January 2026 00:49:03 +0000 (0:00:00.475) 0:02:42.279 ****** 2026-01-03 00:50:50.445313 | orchestrator | ok: [testbed-manager] 2026-01-03 00:50:50.445318 | orchestrator | 2026-01-03 00:50:50.445322 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-01-03 00:50:50.445326 | orchestrator | Saturday 03 January 2026 00:49:03 +0000 (0:00:00.164) 0:02:42.444 ****** 2026-01-03 00:50:50.445330 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-01-03 00:50:50.445334 | orchestrator | 2026-01-03 00:50:50.445338 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-01-03 00:50:50.445348 | orchestrator | Saturday 03 January 2026 00:49:03 +0000 (0:00:00.220) 0:02:42.664 ****** 2026-01-03 00:50:50.445352 | orchestrator | ok: [testbed-manager] 2026-01-03 00:50:50.445355 | orchestrator | 2026-01-03 00:50:50.445359 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-01-03 00:50:50.445363 | orchestrator | Saturday 03 January 2026 00:49:04 +0000 (0:00:00.699) 0:02:43.364 ****** 2026-01-03 00:50:50.445373 | orchestrator | ok: [testbed-manager] 2026-01-03 00:50:50.445377 | orchestrator | 2026-01-03 00:50:50.445415 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-01-03 00:50:50.445444 | orchestrator | Saturday 03 January 2026 00:49:05 +0000 (0:00:01.507) 0:02:44.871 ****** 2026-01-03 00:50:50.445452 | orchestrator | changed: [testbed-manager] 2026-01-03 00:50:50.445458 | orchestrator | 2026-01-03 00:50:50.445466 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-01-03 00:50:50.445473 | orchestrator | Saturday 03 January 2026 00:49:06 +0000 (0:00:00.738) 0:02:45.610 ****** 2026-01-03 00:50:50.445479 | orchestrator | ok: [testbed-manager] 2026-01-03 00:50:50.445485 | orchestrator | 2026-01-03 00:50:50.445492 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-01-03 00:50:50.445497 | orchestrator | Saturday 03 January 2026 00:49:06 +0000 (0:00:00.375) 0:02:45.986 ****** 2026-01-03 00:50:50.445501 | orchestrator | changed: [testbed-manager] 2026-01-03 00:50:50.445505 | orchestrator | 2026-01-03 00:50:50.445509 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-01-03 00:50:50.445513 | orchestrator | Saturday 03 January 2026 00:49:14 +0000 (0:00:07.378) 0:02:53.364 ****** 2026-01-03 00:50:50.445517 | orchestrator | changed: [testbed-manager] 2026-01-03 00:50:50.445521 | orchestrator | 2026-01-03 00:50:50.445525 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-01-03 00:50:50.445529 | orchestrator | Saturday 03 January 2026 00:49:30 +0000 (0:00:16.446) 0:03:09.811 ****** 2026-01-03 00:50:50.445533 | orchestrator | ok: [testbed-manager] 2026-01-03 
00:50:50.445537 | orchestrator | 2026-01-03 00:50:50.445541 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-01-03 00:50:50.445545 | orchestrator | 2026-01-03 00:50:50.445549 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-01-03 00:50:50.445553 | orchestrator | Saturday 03 January 2026 00:49:31 +0000 (0:00:00.886) 0:03:10.697 ****** 2026-01-03 00:50:50.445557 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:50:50.445562 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:50:50.445565 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:50:50.445569 | orchestrator | 2026-01-03 00:50:50.445573 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-01-03 00:50:50.445578 | orchestrator | Saturday 03 January 2026 00:49:31 +0000 (0:00:00.376) 0:03:11.074 ****** 2026-01-03 00:50:50.445582 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:50.445586 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:50:50.445589 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:50:50.445593 | orchestrator | 2026-01-03 00:50:50.445597 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-01-03 00:50:50.445601 | orchestrator | Saturday 03 January 2026 00:49:32 +0000 (0:00:00.355) 0:03:11.429 ****** 2026-01-03 00:50:50.445606 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:50:50.445610 | orchestrator | 2026-01-03 00:50:50.445614 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-01-03 00:50:50.445617 | orchestrator | Saturday 03 January 2026 00:49:33 +0000 (0:00:01.131) 0:03:12.561 ****** 2026-01-03 00:50:50.445621 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-03 00:50:50.445625 | 
orchestrator | 2026-01-03 00:50:50.445629 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-01-03 00:50:50.445634 | orchestrator | Saturday 03 January 2026 00:49:34 +0000 (0:00:00.910) 0:03:13.472 ****** 2026-01-03 00:50:50.445644 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 00:50:50.445649 | orchestrator | 2026-01-03 00:50:50.445654 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-01-03 00:50:50.445658 | orchestrator | Saturday 03 January 2026 00:49:35 +0000 (0:00:00.750) 0:03:14.222 ****** 2026-01-03 00:50:50.445661 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:50.445665 | orchestrator | 2026-01-03 00:50:50.445669 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-01-03 00:50:50.445673 | orchestrator | Saturday 03 January 2026 00:49:35 +0000 (0:00:00.128) 0:03:14.350 ****** 2026-01-03 00:50:50.445677 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 00:50:50.445681 | orchestrator | 2026-01-03 00:50:50.445685 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-01-03 00:50:50.445689 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:00.881) 0:03:15.232 ****** 2026-01-03 00:50:50.445693 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:50.445697 | orchestrator | 2026-01-03 00:50:50.445700 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-01-03 00:50:50.445704 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:00.101) 0:03:15.334 ****** 2026-01-03 00:50:50.445708 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:50.445712 | orchestrator | 2026-01-03 00:50:50.445717 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-01-03 00:50:50.445720 | orchestrator | Saturday 03 
January 2026 00:49:36 +0000 (0:00:00.106) 0:03:15.441 ****** 2026-01-03 00:50:50.445725 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:50.445728 | orchestrator | 2026-01-03 00:50:50.445732 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-01-03 00:50:50.445736 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:00.102) 0:03:15.544 ****** 2026-01-03 00:50:50.445740 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:50.445744 | orchestrator | 2026-01-03 00:50:50.445748 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-01-03 00:50:50.445752 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:00.094) 0:03:15.638 ****** 2026-01-03 00:50:50.445756 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-03 00:50:50.445760 | orchestrator | 2026-01-03 00:50:50.445764 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-01-03 00:50:50.445767 | orchestrator | Saturday 03 January 2026 00:49:41 +0000 (0:00:04.495) 0:03:20.134 ****** 2026-01-03 00:50:50.445771 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-01-03 00:50:50.445781 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-01-03 00:50:50.445785 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-01-03 00:50:50.445791 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-01-03 00:50:50.445799 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-01-03 00:50:50.445805 | orchestrator | 2026-01-03 00:50:50.445811 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-01-03 00:50:50.445818 | orchestrator | Saturday 03 January 2026 00:50:23 +0000 (0:00:42.221) 0:04:02.356 ****** 2026-01-03 00:50:50.445824 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 00:50:50.445830 | orchestrator | 2026-01-03 00:50:50.445836 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-01-03 00:50:50.445841 | orchestrator | Saturday 03 January 2026 00:50:24 +0000 (0:00:01.097) 0:04:03.453 ****** 2026-01-03 00:50:50.445847 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-03 00:50:50.445854 | orchestrator | 2026-01-03 00:50:50.445860 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-01-03 00:50:50.445867 | orchestrator | Saturday 03 January 2026 00:50:25 +0000 (0:00:01.421) 0:04:04.875 ****** 2026-01-03 00:50:50.445874 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-03 00:50:50.445886 | orchestrator | 2026-01-03 00:50:50.446448 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-01-03 00:50:50.446538 | orchestrator | Saturday 03 January 2026 00:50:26 +0000 (0:00:00.961) 0:04:05.837 ****** 2026-01-03 00:50:50.446549 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:50.446557 | orchestrator | 2026-01-03 00:50:50.446563 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-01-03 00:50:50.446570 | orchestrator 
| Saturday 03 January 2026 00:50:26 +0000 (0:00:00.122) 0:04:05.959 ****** 2026-01-03 00:50:50.446575 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-01-03 00:50:50.446580 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-01-03 00:50:50.446584 | orchestrator | 2026-01-03 00:50:50.446588 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-01-03 00:50:50.446593 | orchestrator | Saturday 03 January 2026 00:50:28 +0000 (0:00:01.777) 0:04:07.737 ****** 2026-01-03 00:50:50.446596 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:50.446600 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:50:50.446604 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:50:50.446608 | orchestrator | 2026-01-03 00:50:50.446616 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-01-03 00:50:50.446620 | orchestrator | Saturday 03 January 2026 00:50:29 +0000 (0:00:00.375) 0:04:08.113 ****** 2026-01-03 00:50:50.446624 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:50:50.446628 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:50:50.446632 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:50:50.446642 | orchestrator | 2026-01-03 00:50:50.446646 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-01-03 00:50:50.446650 | orchestrator | 2026-01-03 00:50:50.446654 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-01-03 00:50:50.446658 | orchestrator | Saturday 03 January 2026 00:50:30 +0000 (0:00:01.323) 0:04:09.436 ****** 2026-01-03 00:50:50.446662 | orchestrator | ok: [testbed-manager] 2026-01-03 00:50:50.446666 | orchestrator | 2026-01-03 00:50:50.446669 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-01-03 00:50:50.446673 | orchestrator | Saturday 03 January 2026 00:50:30 +0000 (0:00:00.121) 0:04:09.558 ****** 2026-01-03 00:50:50.446677 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-01-03 00:50:50.446681 | orchestrator | 2026-01-03 00:50:50.446685 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-01-03 00:50:50.446689 | orchestrator | Saturday 03 January 2026 00:50:30 +0000 (0:00:00.208) 0:04:09.766 ****** 2026-01-03 00:50:50.446693 | orchestrator | changed: [testbed-manager] 2026-01-03 00:50:50.446696 | orchestrator | 2026-01-03 00:50:50.446700 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-01-03 00:50:50.446704 | orchestrator | 2026-01-03 00:50:50.446708 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-01-03 00:50:50.446712 | orchestrator | Saturday 03 January 2026 00:50:36 +0000 (0:00:05.465) 0:04:15.232 ****** 2026-01-03 00:50:50.446716 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:50:50.446722 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:50:50.446728 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:50:50.446759 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:50:50.446767 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:50:50.446772 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:50:50.446778 | orchestrator | 2026-01-03 00:50:50.446784 | orchestrator | TASK [Manage labels] *********************************************************** 2026-01-03 00:50:50.446790 | orchestrator | Saturday 03 January 2026 00:50:37 +0000 (0:00:00.995) 0:04:16.228 ****** 2026-01-03 00:50:50.446796 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-03 00:50:50.446801 | orchestrator | ok: [testbed-node-3 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-01-03 00:50:50.446820 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-03 00:50:50.446826 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-03 00:50:50.446832 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-03 00:50:50.446838 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-03 00:50:50.446844 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-03 00:50:50.446850 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-03 00:50:50.446869 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-03 00:50:50.446875 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-03 00:50:50.446881 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-03 00:50:50.446888 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-03 00:50:50.446892 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-03 00:50:50.446925 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-03 00:50:50.446931 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-03 00:50:50.446937 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-03 00:50:50.446942 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-03 00:50:50.446947 | orchestrator | ok: [testbed-node-3 -> 
localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-03 00:50:50.446953 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-03 00:50:50.446959 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-03 00:50:50.446965 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-03 00:50:50.446970 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-03 00:50:50.446976 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-03 00:50:50.446983 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-03 00:50:50.446989 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-03 00:50:50.446994 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-03 00:50:50.447001 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-03 00:50:50.447007 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-03 00:50:50.447019 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-03 00:50:50.447026 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-03 00:50:50.447032 | orchestrator | 2026-01-03 00:50:50.447038 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-03 00:50:50.447044 | orchestrator | Saturday 03 January 2026 00:50:47 +0000 (0:00:10.750) 0:04:26.979 ****** 2026-01-03 00:50:50.447050 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:50:50.447056 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:50:50.447062 | 
orchestrator | skipping: [testbed-node-5] 2026-01-03 00:50:50.447068 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:50.447072 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:50:50.447075 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:50:50.447087 | orchestrator | 2026-01-03 00:50:50.447091 | orchestrator | TASK [Manage taints] *********************************************************** 2026-01-03 00:50:50.447095 | orchestrator | Saturday 03 January 2026 00:50:48 +0000 (0:00:00.543) 0:04:27.523 ****** 2026-01-03 00:50:50.447099 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:50:50.447102 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:50:50.447106 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:50:50.447110 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:50:50.447114 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:50:50.447118 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:50:50.447122 | orchestrator | 2026-01-03 00:50:50.447125 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:50:50.447129 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:50:50.447136 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-03 00:50:50.447141 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-03 00:50:50.447145 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-03 00:50:50.447149 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-03 00:50:50.447152 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-03 00:50:50.447156 | orchestrator | 
testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-03 00:50:50.447160 | orchestrator | 2026-01-03 00:50:50.447164 | orchestrator | 2026-01-03 00:50:50.447168 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:50:50.447177 | orchestrator | Saturday 03 January 2026 00:50:48 +0000 (0:00:00.424) 0:04:27.947 ****** 2026-01-03 00:50:50.447181 | orchestrator | =============================================================================== 2026-01-03 00:50:50.447185 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.04s 2026-01-03 00:50:50.447189 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.22s 2026-01-03 00:50:50.447193 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.34s 2026-01-03 00:50:50.447197 | orchestrator | kubectl : Install required packages ------------------------------------ 16.45s 2026-01-03 00:50:50.447201 | orchestrator | Manage labels ---------------------------------------------------------- 10.75s 2026-01-03 00:50:50.447204 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.69s 2026-01-03 00:50:50.447208 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.38s 2026-01-03 00:50:50.447212 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.47s 2026-01-03 00:50:50.447216 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 4.85s 2026-01-03 00:50:50.447220 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.50s 2026-01-03 00:50:50.447223 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.11s 2026-01-03 
00:50:50.447227 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.61s 2026-01-03 00:50:50.447231 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.45s 2026-01-03 00:50:50.447235 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.04s 2026-01-03 00:50:50.447244 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.01s 2026-01-03 00:50:50.447273 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 1.83s 2026-01-03 00:50:50.447278 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.78s 2026-01-03 00:50:50.447282 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.78s 2026-01-03 00:50:50.447286 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.73s 2026-01-03 00:50:50.447290 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.62s 2026-01-03 00:50:50.447298 | orchestrator | 2026-01-03 00:50:50 | INFO  | Task 02a63a4e-7f00-4447-a795-1aa665c12ecf is in state STARTED 2026-01-03 00:50:50.447302 | orchestrator | 2026-01-03 00:50:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:53.482826 | orchestrator | 2026-01-03 00:50:53 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:50:53.484690 | orchestrator | 2026-01-03 00:50:53 | INFO  | Task 86a947d5-d611-4d34-b2d6-c7c9b1c01342 is in state STARTED 2026-01-03 00:50:53.486971 | orchestrator | 2026-01-03 00:50:53 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:50:53.489037 | orchestrator | 2026-01-03 00:50:53 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state STARTED 2026-01-03 00:50:53.490571 | orchestrator | 2026-01-03 00:50:53 | INFO  | Task 
574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED 2026-01-03 00:50:53.492739 | orchestrator | 2026-01-03 00:50:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:50:59.599696 | orchestrator | 2026-01-03 00:50:59 | INFO  | Task 02a63a4e-7f00-4447-a795-1aa665c12ecf is in state SUCCESS 2026-01-03 00:51:05.702501 | orchestrator | 2026-01-03 00:51:05 | INFO  | Task 86a947d5-d611-4d34-b2d6-c7c9b1c01342 is in state SUCCESS 2026-01-03 00:51:51.428107 | orchestrator | 2026-01-03 00:51:51 | INFO  | Task 6325ad46-54bf-4eb5-9f77-ff27d7e33880 is in state SUCCESS 2026-01-03 00:51:51.433311 | orchestrator | 2026-01-03 00:51:51.433401 | orchestrator | 2026-01-03 00:51:51.433411 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-03 00:51:51.433444 | orchestrator | 2026-01-03 00:51:51.433450 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-03 00:51:51.433457 | orchestrator | Saturday 03 January 2026 00:50:54 +0000 (0:00:00.206) 0:00:00.206
****** 2026-01-03 00:51:51.433464 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-03 00:51:51.433470 | orchestrator | 2026-01-03 00:51:51.433475 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-03 00:51:51.433481 | orchestrator | Saturday 03 January 2026 00:50:54 +0000 (0:00:00.805) 0:00:01.012 ****** 2026-01-03 00:51:51.433487 | orchestrator | changed: [testbed-manager] 2026-01-03 00:51:51.433493 | orchestrator | 2026-01-03 00:51:51.433498 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-03 00:51:51.433503 | orchestrator | Saturday 03 January 2026 00:50:56 +0000 (0:00:01.231) 0:00:02.243 ****** 2026-01-03 00:51:51.433509 | orchestrator | changed: [testbed-manager] 2026-01-03 00:51:51.433515 | orchestrator | 2026-01-03 00:51:51.433521 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:51:51.433527 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:51:51.433535 | orchestrator | 2026-01-03 00:51:51.433540 | orchestrator | 2026-01-03 00:51:51.433547 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:51:51.433553 | orchestrator | Saturday 03 January 2026 00:50:56 +0000 (0:00:00.644) 0:00:02.887 ****** 2026-01-03 00:51:51.433572 | orchestrator | =============================================================================== 2026-01-03 00:51:51.433586 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.23s 2026-01-03 00:51:51.433592 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.81s 2026-01-03 00:51:51.433598 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.64s 2026-01-03 00:51:51.433604 | orchestrator | 
2026-01-03 00:51:51.433610 | orchestrator |
2026-01-03 00:51:51.433616 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-03 00:51:51.433622 | orchestrator |
2026-01-03 00:51:51.433627 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-03 00:51:51.433633 | orchestrator | Saturday 03 January 2026 00:50:53 +0000 (0:00:00.171) 0:00:00.171 ******
2026-01-03 00:51:51.433639 | orchestrator | ok: [testbed-manager]
2026-01-03 00:51:51.433647 | orchestrator |
2026-01-03 00:51:51.433653 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-03 00:51:51.433660 | orchestrator | Saturday 03 January 2026 00:50:54 +0000 (0:00:00.743) 0:00:00.915 ******
2026-01-03 00:51:51.433666 | orchestrator | ok: [testbed-manager]
2026-01-03 00:51:51.433672 | orchestrator |
2026-01-03 00:51:51.433678 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-03 00:51:51.433685 | orchestrator | Saturday 03 January 2026 00:50:55 +0000 (0:00:00.645) 0:00:01.560 ******
2026-01-03 00:51:51.433691 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-03 00:51:51.433698 | orchestrator |
2026-01-03 00:51:51.433723 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-03 00:51:51.433730 | orchestrator | Saturday 03 January 2026 00:50:56 +0000 (0:00:00.778) 0:00:02.339 ******
2026-01-03 00:51:51.433737 | orchestrator | changed: [testbed-manager]
2026-01-03 00:51:51.433744 | orchestrator |
2026-01-03 00:51:51.433751 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-03 00:51:51.433758 | orchestrator | Saturday 03 January 2026 00:50:57 +0000 (0:00:01.682) 0:00:04.021 ******
2026-01-03 00:51:51.433765 | orchestrator | changed: [testbed-manager]
2026-01-03 00:51:51.433770 | orchestrator |
2026-01-03 00:51:51.433776 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-03 00:51:51.433782 | orchestrator | Saturday 03 January 2026 00:50:58 +0000 (0:00:00.624) 0:00:04.646 ******
2026-01-03 00:51:51.433797 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-03 00:51:51.433803 | orchestrator |
2026-01-03 00:51:51.433809 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-03 00:51:51.433815 | orchestrator | Saturday 03 January 2026 00:51:00 +0000 (0:00:01.619) 0:00:06.265 ******
2026-01-03 00:51:51.433821 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-03 00:51:51.433826 | orchestrator |
2026-01-03 00:51:51.433833 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-03 00:51:51.433852 | orchestrator | Saturday 03 January 2026 00:51:00 +0000 (0:00:00.897) 0:00:07.163 ******
2026-01-03 00:51:51.433858 | orchestrator | ok: [testbed-manager]
2026-01-03 00:51:51.433864 | orchestrator |
2026-01-03 00:51:51.433869 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-03 00:51:51.433888 | orchestrator | Saturday 03 January 2026 00:51:01 +0000 (0:00:00.652) 0:00:07.816 ******
2026-01-03 00:51:51.433894 | orchestrator | ok: [testbed-manager]
2026-01-03 00:51:51.433900 | orchestrator |
2026-01-03 00:51:51.433906 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:51:51.433912 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 00:51:51.433918 | orchestrator |
2026-01-03 00:51:51.433924 | orchestrator |
2026-01-03 00:51:51.433931 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:51:51.433937 | orchestrator | Saturday 03 January 2026 00:51:02 +0000 (0:00:00.529) 0:00:08.345 ******
2026-01-03 00:51:51.433944 | orchestrator | ===============================================================================
2026-01-03 00:51:51.433950 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.68s
2026-01-03 00:51:51.433968 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.62s
2026-01-03 00:51:51.433976 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.90s
2026-01-03 00:51:51.434001 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s
2026-01-03 00:51:51.434008 | orchestrator | Get home directory of operator user ------------------------------------- 0.74s
2026-01-03 00:51:51.434058 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.65s
2026-01-03 00:51:51.434065 | orchestrator | Create .kube directory -------------------------------------------------- 0.65s
2026-01-03 00:51:51.434071 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.62s
2026-01-03 00:51:51.434078 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.53s
2026-01-03 00:51:51.434084 | orchestrator |
2026-01-03 00:51:51.434090 | orchestrator |
2026-01-03 00:51:51.434097 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-01-03 00:51:51.434103 | orchestrator |
2026-01-03 00:51:51.434109 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-01-03 00:51:51.434115 | orchestrator | Saturday 03 January 2026 00:49:30 +0000 (0:00:00.332) 0:00:00.332 ******
2026-01-03 00:51:51.434140 | orchestrator | ok: [localhost] => {
2026-01-03 00:51:51.434148 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-01-03 00:51:51.434154 | orchestrator | }
2026-01-03 00:51:51.434160 | orchestrator |
2026-01-03 00:51:51.434165 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-01-03 00:51:51.434170 | orchestrator | Saturday 03 January 2026 00:49:30 +0000 (0:00:00.157) 0:00:00.490 ******
2026-01-03 00:51:51.434177 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-01-03 00:51:51.434185 | orchestrator | ...ignoring
2026-01-03 00:51:51.434190 | orchestrator |
2026-01-03 00:51:51.434196 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-01-03 00:51:51.434202 | orchestrator | Saturday 03 January 2026 00:49:33 +0000 (0:00:03.610) 0:00:04.101 ******
2026-01-03 00:51:51.434217 | orchestrator | skipping: [localhost]
2026-01-03 00:51:51.434223 | orchestrator |
2026-01-03 00:51:51.434229 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-01-03 00:51:51.434235 | orchestrator | Saturday 03 January 2026 00:49:33 +0000 (0:00:00.098) 0:00:04.199 ******
2026-01-03 00:51:51.434241 | orchestrator | ok: [localhost]
2026-01-03 00:51:51.434248 | orchestrator |
2026-01-03 00:51:51.434254 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 00:51:51.434260 | orchestrator |
2026-01-03 00:51:51.434266 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 00:51:51.434272 | orchestrator | Saturday 03 January 2026 00:49:34 +0000 (0:00:00.229) 0:00:04.429 ******
2026-01-03 00:51:51.434278 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:51:51.434284 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:51:51.434290 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:51:51.434297 | orchestrator |
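Editor's note: the "Check RabbitMQ service" task above times out while waiting for the search string "RabbitMQ Management" on 192.168.16.9:15672, and the play treats that as an expected, ignorable failure on a fresh deployment. The sketch below illustrates the general probe pattern (connect, read the banner/response, search for a string, report False instead of raising when nothing is listening). It is a minimal illustration only, not the implementation of Ansible's `wait_for` module; the host, port, and search string are taken from the log.

```python
import socket

def found_search_string(host: str, port: int, search: str, timeout: float = 2.0) -> bool:
    """Probe host:port and report whether the service's response contains
    `search`. Connection refused / timed out yields False rather than an
    exception, mirroring the 'fails if not yet deployed - this is fine'
    behaviour of the check above."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as conn:
            conn.settimeout(timeout)
            # An HTTP service such as the RabbitMQ management UI only answers
            # after a request; its login page contains "RabbitMQ Management".
            conn.sendall(b"GET / HTTP/1.0\r\nHost: %b\r\n\r\n" % host.encode())
            data = b""
            while len(data) < 65536:
                chunk = conn.recv(4096)
                if not chunk:
                    break
                data += chunk
            return search.encode() in data
    except OSError:
        return False
```

With RabbitMQ not yet deployed, the probe simply returns False, which is why the playbook can key `kolla_action_rabbitmq = upgrade` off a successful check and fall back to the fresh-deploy action otherwise.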
2026-01-03 00:51:51.434303 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 00:51:51.434310 | orchestrator | Saturday 03 January 2026 00:49:34 +0000 (0:00:00.298) 0:00:04.728 ******
2026-01-03 00:51:51.434317 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-01-03 00:51:51.434324 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-01-03 00:51:51.434331 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-01-03 00:51:51.434338 | orchestrator |
2026-01-03 00:51:51.434345 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-01-03 00:51:51.434351 | orchestrator |
2026-01-03 00:51:51.434358 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-03 00:51:51.434365 | orchestrator | Saturday 03 January 2026 00:49:35 +0000 (0:00:00.785) 0:00:05.513 ******
2026-01-03 00:51:51.434371 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:51:51.434377 | orchestrator |
2026-01-03 00:51:51.434383 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-01-03 00:51:51.434388 | orchestrator | Saturday 03 January 2026 00:49:35 +0000 (0:00:00.453) 0:00:05.967 ******
2026-01-03 00:51:51.434394 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:51:51.434400 | orchestrator |
2026-01-03 00:51:51.434407 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-01-03 00:51:51.434413 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:00.870) 0:00:06.837 ******
2026-01-03 00:51:51.434418 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:51:51.434425 | orchestrator |
2026-01-03 00:51:51.434431 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-01-03 00:51:51.434436 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:00.382) 0:00:07.219 ******
2026-01-03 00:51:51.434442 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:51:51.434448 | orchestrator |
2026-01-03 00:51:51.434453 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-01-03 00:51:51.434459 | orchestrator | Saturday 03 January 2026 00:49:37 +0000 (0:00:00.406) 0:00:07.625 ******
2026-01-03 00:51:51.434465 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:51:51.434470 | orchestrator |
2026-01-03 00:51:51.434476 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-01-03 00:51:51.434482 | orchestrator | Saturday 03 January 2026 00:49:37 +0000 (0:00:00.387) 0:00:08.012 ******
2026-01-03 00:51:51.434489 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:51:51.434495 | orchestrator |
2026-01-03 00:51:51.434501 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-03 00:51:51.434507 | orchestrator | Saturday 03 January 2026 00:49:38 +0000 (0:00:00.589) 0:00:08.602 ******
2026-01-03 00:51:51.434521 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:51:51.434528 | orchestrator |
2026-01-03 00:51:51.434540 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-01-03 00:51:51.434558 | orchestrator | Saturday 03 January 2026 00:49:38 +0000 (0:00:00.645) 0:00:09.247 ******
2026-01-03 00:51:51.434565 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:51:51.434571 | orchestrator |
2026-01-03 00:51:51.434577 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-01-03 00:51:51.434583 | orchestrator | Saturday 03 January 2026 00:49:39 +0000 (0:00:00.969) 0:00:10.217 ******
2026-01-03 00:51:51.434590 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:51:51.434596 | orchestrator |
2026-01-03 00:51:51.434602 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-01-03 00:51:51.434608 | orchestrator | Saturday 03 January 2026 00:49:40 +0000 (0:00:00.550) 0:00:10.767 ******
2026-01-03 00:51:51.434615 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:51:51.434621 | orchestrator |
2026-01-03 00:51:51.434627 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-01-03 00:51:51.434634 | orchestrator | Saturday 03 January 2026 00:49:41 +0000 (0:00:01.000) 0:00:11.767 ******
2026-01-03 00:51:51.434646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:51.434656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:51.434664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:51.434677 | orchestrator |
2026-01-03 00:51:51.434683 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-01-03 00:51:51.434693 | orchestrator | Saturday 03 January 2026 00:49:42 +0000 (0:00:01.426) 0:00:13.194 ******
2026-01-03 00:51:51.434705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:51.434712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:51.434720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:51.434726 | orchestrator |
2026-01-03 00:51:51.434733 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-01-03 00:51:51.434739 | orchestrator | Saturday 03 January 2026 00:49:46 +0000 (0:00:03.519) 0:00:16.713 ******
2026-01-03 00:51:51.434746 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-03 00:51:51.434757 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-03 00:51:51.434763 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-03 00:51:51.434770 | orchestrator |
2026-01-03 00:51:51.434776 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-01-03 00:51:51.434782 | orchestrator | Saturday 03 January 2026 00:49:47 +0000 (0:00:01.488) 0:00:18.201 ******
2026-01-03 00:51:51.434788 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-03 00:51:51.434794 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-03 00:51:51.434804 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-03 00:51:51.434811 | orchestrator |
2026-01-03 00:51:51.434817 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-01-03 00:51:51.434827 | orchestrator | Saturday 03 January 2026 00:49:49 +0000 (0:00:01.819) 0:00:20.020 ******
2026-01-03 00:51:51.434834 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-03 00:51:51.434840 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-03 00:51:51.434846 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-03 00:51:51.434852 | orchestrator |
2026-01-03 00:51:51.434859 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-01-03 00:51:51.434866 | orchestrator | Saturday 03 January 2026 00:49:50 +0000 (0:00:01.229) 0:00:21.249 ******
2026-01-03 00:51:51.434873 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-03 00:51:51.434879 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-03 00:51:51.434884 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-03 00:51:51.434890 | orchestrator |
2026-01-03 00:51:51.434895 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-01-03 00:51:51.434902 | orchestrator | Saturday 03 January 2026 00:49:53 +0000 (0:00:02.179) 0:00:23.428 ******
2026-01-03 00:51:51.434908 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-03 00:51:51.434914 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-03 00:51:51.434921 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-03 00:51:51.434927 | orchestrator |
2026-01-03 00:51:51.434934 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-01-03 00:51:51.434940 | orchestrator | Saturday 03 January 2026 00:49:54 +0000 (0:00:01.638) 0:00:25.066 ******
2026-01-03 00:51:51.434946 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-03 00:51:51.434953 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-03 00:51:51.434959 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-03 00:51:51.434965 | orchestrator |
2026-01-03 00:51:51.434972 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-03 00:51:51.434978 | orchestrator | Saturday 03 January 2026 00:49:56 +0000 (0:00:01.529) 0:00:26.596 ******
2026-01-03 00:51:51.434984 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:51:51.434991 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:51:51.434997 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:51:51.435003 | orchestrator |
2026-01-03 00:51:51.435010 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-01-03 00:51:51.435021 | orchestrator | Saturday 03 January 2026 00:49:56 +0000 (0:00:00.551) 0:00:27.148 ******
2026-01-03 00:51:51.435028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:51.435043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:51.435050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-03 00:51:51.435057 | orchestrator |
2026-01-03 00:51:51.435064 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
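Editor's note: the per-host items above are kolla-style service definitions; their `healthcheck` sub-dict carries `interval`, `retries`, `start_period`, and `timeout` as strings, while `test` is already a command list (`['CMD-SHELL', 'healthcheck_rabbitmq']`). A hypothetical helper (not part of kolla-ansible) that normalizes the string-typed fields into integers before handing them to a container runtime might look like this:

```python
def normalize_healthcheck(hc: dict) -> dict:
    """Hypothetical helper: convert the string-typed numeric fields of a
    kolla-style 'healthcheck' dict (as seen in the log items above) into
    integers, and copy the 'test' command list unchanged."""
    numeric = ("interval", "retries", "start_period", "timeout")
    out = {k: int(v) for k, v in hc.items() if k in numeric}
    out["test"] = list(hc["test"])  # e.g. ['CMD-SHELL', 'healthcheck_rabbitmq']
    return out

# The healthcheck dict exactly as it appears in the service definition:
example = {"interval": "30", "retries": "3", "start_period": "5",
           "test": ["CMD-SHELL", "healthcheck_rabbitmq"], "timeout": "30"}
```

Whether a given runtime wants these values as seconds, duration strings, or nanoseconds depends on its API; the point here is only the string-to-number normalization step.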
2026-01-03 00:51:51.435070 | orchestrator | Saturday 03 January 2026 00:49:58 +0000 (0:00:01.740) 0:00:28.888 ******
2026-01-03 00:51:51.435077 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:51:51.435083 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:51:51.435090 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:51:51.435096 | orchestrator |
2026-01-03 00:51:51.435102 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-01-03 00:51:51.435108 | orchestrator | Saturday 03 January 2026 00:49:59 +0000 (0:00:00.862) 0:00:29.750 ******
2026-01-03 00:51:51.435114 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:51:51.435185 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:51:51.435195 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:51:51.435205 | orchestrator |
2026-01-03 00:51:51.435214 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-01-03 00:51:51.435225 | orchestrator | Saturday 03 January 2026 00:50:05 +0000 (0:00:06.367) 0:00:36.118 ******
2026-01-03 00:51:51.435232 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:51:51.435238 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:51:51.435247 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:51:51.435254 | orchestrator |
2026-01-03 00:51:51.435260 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-03 00:51:51.435266 | orchestrator |
2026-01-03 00:51:51.435273 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-03 00:51:51.435279 | orchestrator | Saturday 03 January 2026 00:50:06 +0000 (0:00:00.790) 0:00:36.909 ******
2026-01-03 00:51:51.435285 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:51:51.435292 | orchestrator |
2026-01-03 00:51:51.435298 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-03 00:51:51.435304 | orchestrator | Saturday 03 January 2026 00:50:07 +0000 (0:00:00.827) 0:00:37.736 ******
2026-01-03 00:51:51.435310 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:51:51.435317 | orchestrator |
2026-01-03 00:51:51.435323 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-03 00:51:51.435330 | orchestrator | Saturday 03 January 2026 00:50:07 +0000 (0:00:00.348) 0:00:38.084 ******
2026-01-03 00:51:51.435336 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:51:51.435342 | orchestrator |
2026-01-03 00:51:51.435349 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-03 00:51:51.435356 | orchestrator | Saturday 03 January 2026 00:50:14 +0000 (0:00:06.992) 0:00:45.077 ******
2026-01-03 00:51:51.435363 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:51:51.435369 | orchestrator |
2026-01-03 00:51:51.435376 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-03 00:51:51.435383 | orchestrator |
2026-01-03 00:51:51.435390 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-03 00:51:51.435396 | orchestrator | Saturday 03 January 2026 00:51:06 +0000 (0:00:52.071) 0:01:37.149 ******
2026-01-03 00:51:51.435403 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:51:51.435409 | orchestrator |
2026-01-03 00:51:51.435415 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-03 00:51:51.435421 | orchestrator | Saturday 03 January 2026 00:51:07 +0000 (0:00:00.704) 0:01:37.853 ******
2026-01-03 00:51:51.435428 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:51:51.435435 | orchestrator |
2026-01-03 00:51:51.435441 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-03 00:51:51.435447 | orchestrator | Saturday 03 January 2026 00:51:07 +0000 (0:00:00.349) 0:01:38.203 ******
2026-01-03 00:51:51.435453 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:51:51.435458 | orchestrator |
2026-01-03 00:51:51.435464 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-03 00:51:51.435470 | orchestrator | Saturday 03 January 2026 00:51:14 +0000 (0:00:06.861) 0:01:45.065 ******
2026-01-03 00:51:51.435475 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:51:51.435481 | orchestrator |
2026-01-03 00:51:51.435487 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-03 00:51:51.435493 | orchestrator |
2026-01-03 00:51:51.435510 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-03 00:51:51.435516 | orchestrator | Saturday 03 January 2026 00:51:24 +0000 (0:00:10.078) 0:01:55.143 ******
2026-01-03 00:51:51.435523 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:51:51.435529 | orchestrator |
2026-01-03 00:51:51.435541 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-03 00:51:51.435548 | orchestrator | Saturday 03 January 2026 00:51:25 +0000 (0:00:00.662) 0:01:55.806 ******
2026-01-03 00:51:51.435561 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:51:51.435568 | orchestrator |
2026-01-03 00:51:51.435574 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-03 00:51:51.435580 | orchestrator | Saturday 03 January 2026 00:51:25 +0000 (0:00:00.196) 0:01:56.002 ******
2026-01-03 00:51:51.435587 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:51:51.435593 | orchestrator |
2026-01-03 00:51:51.435599 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-03 00:51:51.435606 | orchestrator | Saturday 03 January 2026 00:51:32 +0000 (0:00:06.672) 0:02:02.675 ******
2026-01-03 00:51:51.435612 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:51:51.435618 | orchestrator |
2026-01-03 00:51:51.435624 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-01-03 00:51:51.435631 | orchestrator |
2026-01-03 00:51:51.435637 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-01-03 00:51:51.435644 | orchestrator | Saturday 03 January 2026 00:51:44 +0000 (0:00:12.441) 0:02:15.117 ******
2026-01-03 00:51:51.435650 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:51:51.435657 | orchestrator |
2026-01-03 00:51:51.435663 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-01-03 00:51:51.435670 | orchestrator | Saturday 03 January 2026 00:51:45 +0000 (0:00:00.536) 0:02:15.654 ******
2026-01-03 00:51:51.435676 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-01-03 00:51:51.435682 | orchestrator | enable_outward_rabbitmq_True
2026-01-03 00:51:51.435689 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-01-03 00:51:51.435695 | orchestrator | outward_rabbitmq_restart
2026-01-03 00:51:51.435702 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:51:51.435708 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:51:51.435715 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:51:51.435721 | orchestrator |
2026-01-03 00:51:51.435727 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-01-03 00:51:51.435733 | orchestrator | skipping: no hosts matched
2026-01-03 00:51:51.435739 | orchestrator |
2026-01-03 00:51:51.435746 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-01-03 00:51:51.435752 | orchestrator | skipping: no hosts matched
2026-01-03 00:51:51.435759 | orchestrator |
2026-01-03 00:51:51.435765 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-01-03 00:51:51.435771 | orchestrator | skipping: no hosts matched
2026-01-03 00:51:51.435778 | orchestrator |
2026-01-03 00:51:51.435784 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:51:51.435791 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-01-03 00:51:51.435798 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-03 00:51:51.435805 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:51:51.435811 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 00:51:51.435817 | orchestrator |
2026-01-03 00:51:51.435824 | orchestrator |
2026-01-03 00:51:51.435830 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:51:51.435836 | orchestrator | Saturday 03 January 2026 00:51:48 +0000 (0:00:03.154) 0:02:18.808 ******
2026-01-03 00:51:51.435843 | orchestrator | ===============================================================================
2026-01-03 00:51:51.435849 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 74.59s
2026-01-03 00:51:51.435855 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 20.53s
2026-01-03 00:51:51.435866 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.37s
2026-01-03 00:51:51.435873 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.61s
2026-01-03 00:51:51.435879 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.52s
2026-01-03 00:51:51.435885 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.15s
2026-01-03 00:51:51.435892 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.19s
2026-01-03 00:51:51.435898 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.18s
2026-01-03 00:51:51.435904 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.82s
2026-01-03 00:51:51.435910 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.74s
2026-01-03 00:51:51.435917 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.64s
2026-01-03 00:51:51.435923 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.53s
2026-01-03 00:51:51.435929 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.49s
2026-01-03 00:51:51.435936 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.43s
2026-01-03 00:51:51.435942 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.23s
2026-01-03 00:51:51.435951 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.00s
2026-01-03 00:51:51.435958 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.97s
2026-01-03 00:51:51.435968 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.89s
2026-01-03 00:51:51.435974 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.87s
2026-01-03 00:51:51.435980 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.86s
2026-01-03 00:51:51.435987 | orchestrator |
2026-01-03 00:51:51 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state
STARTED 2026-01-03 00:51:51.435993 | orchestrator | 2026-01-03 00:51:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:51:54.462208 | orchestrator | 2026-01-03 00:51:54 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:51:54.463603 | orchestrator | 2026-01-03 00:51:54 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:51:54.464911 | orchestrator | 2026-01-03 00:51:54 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED 2026-01-03 00:51:54.464967 | orchestrator | 2026-01-03 00:51:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:51:57.512289 | orchestrator | 2026-01-03 00:51:57 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:51:57.515854 | orchestrator | 2026-01-03 00:51:57 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:51:57.518757 | orchestrator | 2026-01-03 00:51:57 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED 2026-01-03 00:51:57.518809 | orchestrator | 2026-01-03 00:51:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:00.554058 | orchestrator | 2026-01-03 00:52:00 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:52:00.554608 | orchestrator | 2026-01-03 00:52:00 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:52:00.555678 | orchestrator | 2026-01-03 00:52:00 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED 2026-01-03 00:52:00.555713 | orchestrator | 2026-01-03 00:52:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:03.597293 | orchestrator | 2026-01-03 00:52:03 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:52:03.597828 | orchestrator | 2026-01-03 00:52:03 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:52:03.599505 | orchestrator | 
2026-01-03 00:52:03 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED 2026-01-03 00:52:03.599534 | orchestrator | 2026-01-03 00:52:03 | INFO  | Wait 1 second(s) until the next check [... identical polling records elided: all three tasks remained in state STARTED, re-checked every ~3 seconds through 00:52:37 ...] 2026-01-03 00:52:40.183359 | orchestrator | 
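The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" records above come from a client polling a task API until the tasks leave the STARTED state. A minimal sketch of that polling pattern (the `wait_for_tasks` name and the caller-supplied `get_state` callable are hypothetical stand-ins, not the real OSISM client API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll task states until none is PENDING/STARTED, mirroring the
    'Task ... is in state STARTED / Wait 1 second(s)' loop in the log.

    get_state is a caller-supplied callable (a stand-in for the real
    task-API client) mapping a task id to a state string.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                #任务 reached a terminal state (e.g. SUCCESS/FAILURE); stop polling it.
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

The fixed one-second sleep matches the log's cadence; the roughly three-second gap between cycles in the output is the sleep plus the per-task state queries themselves.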
2026-01-03 00:52:40 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:52:40.185061 | orchestrator | 2026-01-03 00:52:40 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:52:40.187098 | orchestrator | 2026-01-03 00:52:40 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED 2026-01-03 00:52:40.187219 | orchestrator | 2026-01-03 00:52:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:43.230220 | orchestrator | 2026-01-03 00:52:43 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:52:43.230269 | orchestrator | 2026-01-03 00:52:43 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:52:43.231352 | orchestrator | 2026-01-03 00:52:43 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state STARTED 2026-01-03 00:52:43.231377 | orchestrator | 2026-01-03 00:52:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:46.272643 | orchestrator | 2026-01-03 00:52:46 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:52:46.273446 | orchestrator | 2026-01-03 00:52:46 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:52:46.276808 | orchestrator | 2026-01-03 00:52:46.276872 | orchestrator | 2026-01-03 00:52:46.276880 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:52:46.276886 | orchestrator | 2026-01-03 00:52:46.276890 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 00:52:46.276895 | orchestrator | Saturday 03 January 2026 00:50:19 +0000 (0:00:00.203) 0:00:00.203 ****** 2026-01-03 00:52:46.276900 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:52:46.276905 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:52:46.276909 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:52:46.276927 | 
orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.276932 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.276936 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.276940 | orchestrator | 2026-01-03 00:52:46.276944 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:52:46.276949 | orchestrator | Saturday 03 January 2026 00:50:20 +0000 (0:00:00.841) 0:00:01.045 ****** 2026-01-03 00:52:46.276953 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-03 00:52:46.276958 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-03 00:52:46.276962 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-03 00:52:46.276966 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-03 00:52:46.276970 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-03 00:52:46.276974 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-03 00:52:46.276978 | orchestrator | 2026-01-03 00:52:46.276982 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-03 00:52:46.276988 | orchestrator | 2026-01-03 00:52:46.276995 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-03 00:52:46.277099 | orchestrator | Saturday 03 January 2026 00:50:21 +0000 (0:00:01.439) 0:00:02.485 ****** 2026-01-03 00:52:46.277106 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:52:46.277111 | orchestrator | 2026-01-03 00:52:46.277115 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-03 00:52:46.277119 | orchestrator | Saturday 03 January 2026 00:50:23 +0000 (0:00:01.486) 0:00:03.971 ****** 2026-01-03 00:52:46.277125 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-3], [testbed-node-5], [testbed-node-0], [testbed-node-1], [testbed-node-2] => (same ovn-controller item on each node; identical per-node output elided)
orchestrator |
orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
orchestrator | Saturday 03 January 2026 00:50:24 +0000 (0:00:01.298) 0:00:05.269 ******
orchestrator | changed: [testbed-node-5], [testbed-node-4], [testbed-node-3], [testbed-node-0], [testbed-node-1], [testbed-node-2] => (same ovn-controller item; per-node output elided)
orchestrator |
orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
orchestrator | Saturday 03 January 2026 00:50:26 +0000 (0:00:01.600) 0:00:06.870 ******
orchestrator | changed: [testbed-node-3], [testbed-node-4], [testbed-node-5], [testbed-node-0], [testbed-node-1], [testbed-node-2] => (same ovn-controller item; per-node output elided)
orchestrator |
orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
orchestrator | Saturday 03 January 2026 00:50:27 +0000 (0:00:01.357) 0:00:08.228 ******
orchestrator | changed: [testbed-node-3], [testbed-node-4], [testbed-node-5], [testbed-node-0], [testbed-node-1], [testbed-node-2] => (same ovn-controller item; per-node output elided)
orchestrator |
orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
orchestrator | Saturday 03 January 2026 00:50:29 +0000 (0:00:01.977) 0:00:10.205 ******
orchestrator | changed: [testbed-node-3], [testbed-node-5], [testbed-node-4], [testbed-node-0], [testbed-node-2], [testbed-node-1] => (same ovn-controller item; per-node output elided)
orchestrator |
orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
orchestrator | Saturday 03 January 2026 00:50:31 +0000 (0:00:01.931) 0:00:12.137 ******
orchestrator | changed: [testbed-node-4], [testbed-node-5], [testbed-node-3], [testbed-node-2], [testbed-node-0], [testbed-node-1]
orchestrator |
orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
orchestrator | Saturday 03 January 2026 00:50:34 +0000 (0:00:03.189) 0:00:15.326 ******
orchestrator | changed: 
[testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-03 00:52:46.277500 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-03 00:52:46.277504 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-03 00:52:46.277509 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-03 00:52:46.277516 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-03 00:52:46.277521 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-03 00:52:46.277526 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-03 00:52:46.277530 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-03 00:52:46.277538 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-03 00:52:46.277544 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-03 00:52:46.277549 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-03 00:52:46.277553 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-03 00:52:46.277558 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-03 00:52:46.277562 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-03 00:52:46.277567 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 
2026-01-03 00:52:46.277574 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-03 00:52:46.277578 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-03 00:52:46.277583 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-03 00:52:46.277591 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-03 00:52:46.277596 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-03 00:52:46.277601 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-03 00:52:46.277605 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-03 00:52:46.277610 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-03 00:52:46.277614 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-03 00:52:46.277620 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-03 00:52:46.277624 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-03 00:52:46.277629 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-03 00:52:46.277634 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-03 00:52:46.277638 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-03 00:52:46.277642 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-03 00:52:46.277646 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-03 00:52:46.277650 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-03 00:52:46.277654 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-03 00:52:46.277658 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-03 00:52:46.277662 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-03 00:52:46.277666 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-03 00:52:46.277670 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-03 00:52:46.277674 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-03 00:52:46.277678 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-03 00:52:46.277682 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-03 00:52:46.277686 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-03 00:52:46.277693 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-03 00:52:46.277697 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-03 00:52:46.277701 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-03 00:52:46.277708 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-03 00:52:46.277712 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-03 00:52:46.277716 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-03 00:52:46.277724 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-03 00:52:46.277728 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-03 00:52:46.277732 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-03 00:52:46.277736 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-03 00:52:46.277740 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-03 00:52:46.277744 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-03 00:52:46.277748 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-03 00:52:46.277752 | orchestrator | 2026-01-03 00:52:46.277756 | orchestrator | 
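Editor's note: the "Configure OVN in OVSDB" task above writes each of these settings as an `external-ids` key on the local Open vSwitch database. A minimal manual sketch of the same configuration, assuming a host with a running `ovsdb-server` and `ovs-vsctl` available; the values shown are the ones logged for testbed-node-0, so this is illustrative only, not the playbook's actual implementation:

```shell
# Point ovn-controller at the clustered SB DB and configure Geneve tunnels.
ovs-vsctl set open_vswitch . external-ids:ovn-encap-ip=192.168.16.10
ovs-vsctl set open_vswitch . external-ids:ovn-encap-type=geneve
ovs-vsctl set open_vswitch . \
  external-ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"
ovs-vsctl set open_vswitch . external-ids:ovn-remote-probe-interval=60000
ovs-vsctl set open_vswitch . external-ids:ovn-openflow-probe-interval=60
ovs-vsctl set open_vswitch . external-ids:ovn-monitor-all=false

# Only on nodes where the log shows state 'present' (nodes 0-2, acting as gateways):
ovs-vsctl set open_vswitch . external-ids:ovn-bridge-mappings=physnet1:br-ex
ovs-vsctl set open_vswitch . \
  external-ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"
```

Note the split visible in the log: `ovn-bridge-mappings` and `ovn-cms-options` are removed (`state: absent`) on compute-only nodes 3-5 but set (`state: present`) on nodes 0-2, while the compute nodes instead get `ovn-chassis-mac-mappings`. The manual equivalent of the absent case would be `ovs-vsctl remove open_vswitch . external-ids ovn-bridge-mappings`.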
TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-03 00:52:46.277760 | orchestrator | Saturday 03 January 2026 00:50:56 +0000 (0:00:21.452) 0:00:36.779 ****** 2026-01-03 00:52:46.277764 | orchestrator | 2026-01-03 00:52:46.277768 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-03 00:52:46.277772 | orchestrator | Saturday 03 January 2026 00:50:56 +0000 (0:00:00.077) 0:00:36.856 ****** 2026-01-03 00:52:46.277776 | orchestrator | 2026-01-03 00:52:46.277780 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-03 00:52:46.277784 | orchestrator | Saturday 03 January 2026 00:50:56 +0000 (0:00:00.065) 0:00:36.922 ****** 2026-01-03 00:52:46.277788 | orchestrator | 2026-01-03 00:52:46.277792 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-03 00:52:46.277796 | orchestrator | Saturday 03 January 2026 00:50:56 +0000 (0:00:00.071) 0:00:36.994 ****** 2026-01-03 00:52:46.277800 | orchestrator | 2026-01-03 00:52:46.277804 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-03 00:52:46.277808 | orchestrator | Saturday 03 January 2026 00:50:56 +0000 (0:00:00.105) 0:00:37.099 ****** 2026-01-03 00:52:46.277812 | orchestrator | 2026-01-03 00:52:46.277816 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-03 00:52:46.277820 | orchestrator | Saturday 03 January 2026 00:50:56 +0000 (0:00:00.123) 0:00:37.223 ****** 2026-01-03 00:52:46.277824 | orchestrator | 2026-01-03 00:52:46.277829 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-03 00:52:46.277833 | orchestrator | Saturday 03 January 2026 00:50:56 +0000 (0:00:00.108) 0:00:37.331 ****** 2026-01-03 00:52:46.277837 | orchestrator | ok: [testbed-node-4] 2026-01-03 
00:52:46.277841 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.277845 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:52:46.277849 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:52:46.277853 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.277857 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.277860 | orchestrator | 2026-01-03 00:52:46.277865 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-03 00:52:46.277869 | orchestrator | Saturday 03 January 2026 00:50:58 +0000 (0:00:01.844) 0:00:39.176 ****** 2026-01-03 00:52:46.277873 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:46.277877 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:52:46.277881 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:46.277884 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:46.277888 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:52:46.277892 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:52:46.277896 | orchestrator | 2026-01-03 00:52:46.277900 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-03 00:52:46.277904 | orchestrator | 2026-01-03 00:52:46.277908 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-03 00:52:46.277915 | orchestrator | Saturday 03 January 2026 00:51:27 +0000 (0:00:29.176) 0:01:08.352 ****** 2026-01-03 00:52:46.277919 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:52:46.277923 | orchestrator | 2026-01-03 00:52:46.277927 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-03 00:52:46.277931 | orchestrator | Saturday 03 January 2026 00:51:28 +0000 (0:00:00.604) 0:01:08.956 ****** 2026-01-03 00:52:46.277935 | orchestrator | included: 
/ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:52:46.277939 | orchestrator | 2026-01-03 00:52:46.277942 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-03 00:52:46.277952 | orchestrator | Saturday 03 January 2026 00:51:28 +0000 (0:00:00.458) 0:01:09.415 ****** 2026-01-03 00:52:46.277956 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.277960 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.277964 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.277968 | orchestrator | 2026-01-03 00:52:46.277972 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-03 00:52:46.277976 | orchestrator | Saturday 03 January 2026 00:51:29 +0000 (0:00:01.127) 0:01:10.543 ****** 2026-01-03 00:52:46.277980 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.277984 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.277987 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.277994 | orchestrator | 2026-01-03 00:52:46.278012 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-03 00:52:46.278055 | orchestrator | Saturday 03 January 2026 00:51:30 +0000 (0:00:00.464) 0:01:11.008 ****** 2026-01-03 00:52:46.278059 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.278064 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.278068 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.278072 | orchestrator | 2026-01-03 00:52:46.278076 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-03 00:52:46.278080 | orchestrator | Saturday 03 January 2026 00:51:30 +0000 (0:00:00.367) 0:01:11.376 ****** 2026-01-03 00:52:46.278084 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.278088 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.278092 | orchestrator 
| ok: [testbed-node-2] 2026-01-03 00:52:46.278096 | orchestrator | 2026-01-03 00:52:46.278100 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-03 00:52:46.278104 | orchestrator | Saturday 03 January 2026 00:51:31 +0000 (0:00:00.360) 0:01:11.736 ****** 2026-01-03 00:52:46.278109 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.278116 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.278122 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.278131 | orchestrator | 2026-01-03 00:52:46.278140 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-03 00:52:46.278146 | orchestrator | Saturday 03 January 2026 00:51:31 +0000 (0:00:00.655) 0:01:12.392 ****** 2026-01-03 00:52:46.278152 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278160 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278166 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278173 | orchestrator | 2026-01-03 00:52:46.278179 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-03 00:52:46.278184 | orchestrator | Saturday 03 January 2026 00:51:31 +0000 (0:00:00.337) 0:01:12.730 ****** 2026-01-03 00:52:46.278191 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278197 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278203 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278210 | orchestrator | 2026-01-03 00:52:46.278216 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-03 00:52:46.278223 | orchestrator | Saturday 03 January 2026 00:51:32 +0000 (0:00:00.319) 0:01:13.049 ****** 2026-01-03 00:52:46.278229 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278241 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278248 | orchestrator | skipping: 
[testbed-node-2] 2026-01-03 00:52:46.278255 | orchestrator | 2026-01-03 00:52:46.278261 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-03 00:52:46.278268 | orchestrator | Saturday 03 January 2026 00:51:32 +0000 (0:00:00.350) 0:01:13.400 ****** 2026-01-03 00:52:46.278275 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278281 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278288 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278294 | orchestrator | 2026-01-03 00:52:46.278300 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-03 00:52:46.278307 | orchestrator | Saturday 03 January 2026 00:51:33 +0000 (0:00:00.505) 0:01:13.906 ****** 2026-01-03 00:52:46.278313 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278320 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278326 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278330 | orchestrator | 2026-01-03 00:52:46.278334 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-03 00:52:46.278338 | orchestrator | Saturday 03 January 2026 00:51:33 +0000 (0:00:00.287) 0:01:14.194 ****** 2026-01-03 00:52:46.278342 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278347 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278350 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278354 | orchestrator | 2026-01-03 00:52:46.278359 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-03 00:52:46.278364 | orchestrator | Saturday 03 January 2026 00:51:33 +0000 (0:00:00.313) 0:01:14.507 ****** 2026-01-03 00:52:46.278368 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278372 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278376 | orchestrator | skipping: 
[testbed-node-2] 2026-01-03 00:52:46.278380 | orchestrator | 2026-01-03 00:52:46.278384 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-03 00:52:46.278388 | orchestrator | Saturday 03 January 2026 00:51:34 +0000 (0:00:00.308) 0:01:14.816 ****** 2026-01-03 00:52:46.278392 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278396 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278400 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278404 | orchestrator | 2026-01-03 00:52:46.278408 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-03 00:52:46.278412 | orchestrator | Saturday 03 January 2026 00:51:34 +0000 (0:00:00.309) 0:01:15.125 ****** 2026-01-03 00:52:46.278416 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278420 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278424 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278428 | orchestrator | 2026-01-03 00:52:46.278432 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-03 00:52:46.278436 | orchestrator | Saturday 03 January 2026 00:51:34 +0000 (0:00:00.486) 0:01:15.611 ****** 2026-01-03 00:52:46.278440 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278444 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278448 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278452 | orchestrator | 2026-01-03 00:52:46.278456 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-03 00:52:46.278460 | orchestrator | Saturday 03 January 2026 00:51:35 +0000 (0:00:00.313) 0:01:15.924 ****** 2026-01-03 00:52:46.278468 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278472 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278477 | orchestrator | skipping: 
[testbed-node-2] 2026-01-03 00:52:46.278481 | orchestrator | 2026-01-03 00:52:46.278485 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-03 00:52:46.278489 | orchestrator | Saturday 03 January 2026 00:51:35 +0000 (0:00:00.311) 0:01:16.236 ****** 2026-01-03 00:52:46.278493 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278497 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278510 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278514 | orchestrator | 2026-01-03 00:52:46.278518 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-03 00:52:46.278522 | orchestrator | Saturday 03 January 2026 00:51:35 +0000 (0:00:00.315) 0:01:16.552 ****** 2026-01-03 00:52:46.278527 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:52:46.278531 | orchestrator | 2026-01-03 00:52:46.278535 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-03 00:52:46.278539 | orchestrator | Saturday 03 January 2026 00:51:36 +0000 (0:00:00.785) 0:01:17.337 ****** 2026-01-03 00:52:46.278543 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.278547 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.278551 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.278555 | orchestrator | 2026-01-03 00:52:46.278559 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-03 00:52:46.278563 | orchestrator | Saturday 03 January 2026 00:51:37 +0000 (0:00:00.469) 0:01:17.806 ****** 2026-01-03 00:52:46.278567 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.278571 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.278575 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.278579 | orchestrator | 2026-01-03 00:52:46.278583 | 
orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-03 00:52:46.278588 | orchestrator | Saturday 03 January 2026 00:51:37 +0000 (0:00:00.425) 0:01:18.232 ****** 2026-01-03 00:52:46.278592 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278596 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278600 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278604 | orchestrator | 2026-01-03 00:52:46.278608 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-03 00:52:46.278612 | orchestrator | Saturday 03 January 2026 00:51:38 +0000 (0:00:00.529) 0:01:18.762 ****** 2026-01-03 00:52:46.278616 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278620 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278624 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278628 | orchestrator | 2026-01-03 00:52:46.278632 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-03 00:52:46.278636 | orchestrator | Saturday 03 January 2026 00:51:38 +0000 (0:00:00.334) 0:01:19.097 ****** 2026-01-03 00:52:46.278640 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278644 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278648 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278652 | orchestrator | 2026-01-03 00:52:46.278656 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-03 00:52:46.278660 | orchestrator | Saturday 03 January 2026 00:51:38 +0000 (0:00:00.321) 0:01:19.418 ****** 2026-01-03 00:52:46.278664 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278668 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278672 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278676 | orchestrator | 2026-01-03 
00:52:46.278680 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-03 00:52:46.278684 | orchestrator | Saturday 03 January 2026 00:51:39 +0000 (0:00:00.381) 0:01:19.800 ****** 2026-01-03 00:52:46.278688 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278692 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278696 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278700 | orchestrator | 2026-01-03 00:52:46.278704 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-03 00:52:46.278708 | orchestrator | Saturday 03 January 2026 00:51:39 +0000 (0:00:00.575) 0:01:20.376 ****** 2026-01-03 00:52:46.278712 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.278716 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.278721 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.278728 | orchestrator | 2026-01-03 00:52:46.278732 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-03 00:52:46.278736 | orchestrator | Saturday 03 January 2026 00:51:39 +0000 (0:00:00.347) 0:01:20.723 ****** 2026-01-03 00:52:46.278741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278789 | orchestrator | 2026-01-03 00:52:46.278794 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-03 00:52:46.278801 | orchestrator | Saturday 03 January 2026 00:51:41 +0000 (0:00:01.807) 0:01:22.530 ****** 2026-01-03 00:52:46.278805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278836 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278848 | orchestrator | 2026-01-03 00:52:46.278852 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-03 00:52:46.278860 | orchestrator | Saturday 03 January 2026 00:51:46 +0000 (0:00:04.276) 0:01:26.806 ****** 2026-01-03 00:52:46.278864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278868 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.278926 | orchestrator | 2026-01-03 00:52:46.278932 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-03 00:52:46.278938 | orchestrator | Saturday 03 January 2026 00:51:48 +0000 (0:00:02.752) 0:01:29.559 ****** 2026-01-03 00:52:46.278949 | orchestrator | 2026-01-03 00:52:46.278955 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-03 00:52:46.278961 | orchestrator | Saturday 03 January 2026 00:51:48 +0000 (0:00:00.069) 0:01:29.629 ****** 2026-01-03 
00:52:46.278967 | orchestrator | 2026-01-03 00:52:46.278973 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-03 00:52:46.278980 | orchestrator | Saturday 03 January 2026 00:51:48 +0000 (0:00:00.067) 0:01:29.697 ****** 2026-01-03 00:52:46.278986 | orchestrator | 2026-01-03 00:52:46.278992 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-03 00:52:46.279027 | orchestrator | Saturday 03 January 2026 00:51:49 +0000 (0:00:00.095) 0:01:29.792 ****** 2026-01-03 00:52:46.279034 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:46.279040 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:46.279047 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:46.279053 | orchestrator | 2026-01-03 00:52:46.279059 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-03 00:52:46.279065 | orchestrator | Saturday 03 January 2026 00:51:53 +0000 (0:00:04.180) 0:01:33.973 ****** 2026-01-03 00:52:46.279072 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:46.279078 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:46.279085 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:46.279092 | orchestrator | 2026-01-03 00:52:46.279099 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-03 00:52:46.279106 | orchestrator | Saturday 03 January 2026 00:52:02 +0000 (0:00:08.896) 0:01:42.869 ****** 2026-01-03 00:52:46.279112 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:46.279118 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:46.279124 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:46.279158 | orchestrator | 2026-01-03 00:52:46.279168 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-03 00:52:46.279175 | orchestrator | Saturday 03 January 
2026 00:52:04 +0000 (0:00:02.586) 0:01:45.455 ****** 2026-01-03 00:52:46.279182 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.279189 | orchestrator | 2026-01-03 00:52:46.279196 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-03 00:52:46.279203 | orchestrator | Saturday 03 January 2026 00:52:05 +0000 (0:00:00.337) 0:01:45.793 ****** 2026-01-03 00:52:46.279209 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.279216 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.279222 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.279228 | orchestrator | 2026-01-03 00:52:46.279235 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-03 00:52:46.279241 | orchestrator | Saturday 03 January 2026 00:52:05 +0000 (0:00:00.763) 0:01:46.557 ****** 2026-01-03 00:52:46.279248 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.279255 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.279262 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:46.279268 | orchestrator | 2026-01-03 00:52:46.279275 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-03 00:52:46.279282 | orchestrator | Saturday 03 January 2026 00:52:06 +0000 (0:00:00.630) 0:01:47.188 ****** 2026-01-03 00:52:46.279288 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.279294 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.279301 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.279308 | orchestrator | 2026-01-03 00:52:46.279315 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-03 00:52:46.279328 | orchestrator | Saturday 03 January 2026 00:52:07 +0000 (0:00:00.746) 0:01:47.934 ****** 2026-01-03 00:52:46.279335 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.279342 | orchestrator | skipping: 
[testbed-node-2] 2026-01-03 00:52:46.279348 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:46.279354 | orchestrator | 2026-01-03 00:52:46.279361 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-03 00:52:46.279374 | orchestrator | Saturday 03 January 2026 00:52:07 +0000 (0:00:00.550) 0:01:48.485 ****** 2026-01-03 00:52:46.279378 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.279382 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.279392 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.279397 | orchestrator | 2026-01-03 00:52:46.279401 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-03 00:52:46.279405 | orchestrator | Saturday 03 January 2026 00:52:08 +0000 (0:00:01.039) 0:01:49.525 ****** 2026-01-03 00:52:46.279409 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.279413 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.279416 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.279420 | orchestrator | 2026-01-03 00:52:46.279424 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-03 00:52:46.279429 | orchestrator | Saturday 03 January 2026 00:52:09 +0000 (0:00:00.905) 0:01:50.430 ****** 2026-01-03 00:52:46.279433 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.279437 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.279441 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.279445 | orchestrator | 2026-01-03 00:52:46.279450 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-03 00:52:46.279457 | orchestrator | Saturday 03 January 2026 00:52:10 +0000 (0:00:00.314) 0:01:50.745 ****** 2026-01-03 00:52:46.279464 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279470 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279476 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279483 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279490 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279496 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279504 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279524 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279536 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279542 | orchestrator | 2026-01-03 00:52:46.279549 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-03 00:52:46.279556 | orchestrator | Saturday 03 January 2026 00:52:11 +0000 (0:00:01.606) 0:01:52.351 ****** 2026-01-03 00:52:46.279563 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 
'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279569 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279576 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279584 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279593 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279597 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279620 | orchestrator | 2026-01-03 00:52:46.279624 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-03 00:52:46.279628 | orchestrator | Saturday 03 January 2026 00:52:16 +0000 (0:00:04.509) 0:01:56.861 ****** 
2026-01-03 00:52:46.279636 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279641 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279645 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279649 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279662 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 00:52:46.279679 | orchestrator | 2026-01-03 00:52:46.279683 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-03 
00:52:46.279687 | orchestrator | Saturday 03 January 2026 00:52:19 +0000 (0:00:03.148) 0:02:00.009 ****** 2026-01-03 00:52:46.279692 | orchestrator | 2026-01-03 00:52:46.279696 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-03 00:52:46.279700 | orchestrator | Saturday 03 January 2026 00:52:19 +0000 (0:00:00.065) 0:02:00.074 ****** 2026-01-03 00:52:46.279704 | orchestrator | 2026-01-03 00:52:46.279708 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-03 00:52:46.279712 | orchestrator | Saturday 03 January 2026 00:52:19 +0000 (0:00:00.076) 0:02:00.150 ****** 2026-01-03 00:52:46.279716 | orchestrator | 2026-01-03 00:52:46.279720 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-03 00:52:46.279724 | orchestrator | Saturday 03 January 2026 00:52:19 +0000 (0:00:00.065) 0:02:00.215 ****** 2026-01-03 00:52:46.279728 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:46.279733 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:46.279737 | orchestrator | 2026-01-03 00:52:46.279743 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-03 00:52:46.279748 | orchestrator | Saturday 03 January 2026 00:52:25 +0000 (0:00:06.248) 0:02:06.464 ****** 2026-01-03 00:52:46.279752 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:46.279756 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:46.279760 | orchestrator | 2026-01-03 00:52:46.279764 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-03 00:52:46.279768 | orchestrator | Saturday 03 January 2026 00:52:31 +0000 (0:00:06.258) 0:02:12.723 ****** 2026-01-03 00:52:46.279772 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:52:46.279797 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:52:46.279801 | orchestrator | 
2026-01-03 00:52:46.279805 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-03 00:52:46.279809 | orchestrator | Saturday 03 January 2026 00:52:38 +0000 (0:00:06.382) 0:02:19.106 ****** 2026-01-03 00:52:46.279813 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:52:46.279817 | orchestrator | 2026-01-03 00:52:46.279821 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-03 00:52:46.279825 | orchestrator | Saturday 03 January 2026 00:52:38 +0000 (0:00:00.154) 0:02:19.261 ****** 2026-01-03 00:52:46.279829 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.279833 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.279837 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.279841 | orchestrator | 2026-01-03 00:52:46.279845 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-03 00:52:46.279850 | orchestrator | Saturday 03 January 2026 00:52:39 +0000 (0:00:00.686) 0:02:19.948 ****** 2026-01-03 00:52:46.279854 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.279858 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.279861 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:46.279866 | orchestrator | 2026-01-03 00:52:46.279873 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-03 00:52:46.279877 | orchestrator | Saturday 03 January 2026 00:52:39 +0000 (0:00:00.540) 0:02:20.488 ****** 2026-01-03 00:52:46.279881 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.279885 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.279889 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.279893 | orchestrator | 2026-01-03 00:52:46.279898 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-03 00:52:46.279902 | orchestrator | Saturday 03 
January 2026 00:52:40 +0000 (0:00:00.739) 0:02:21.228 ****** 2026-01-03 00:52:46.279906 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:52:46.279910 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:52:46.279914 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:52:46.279918 | orchestrator | 2026-01-03 00:52:46.279924 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-03 00:52:46.279931 | orchestrator | Saturday 03 January 2026 00:52:41 +0000 (0:00:00.675) 0:02:21.903 ****** 2026-01-03 00:52:46.279937 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.279944 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.279950 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.279956 | orchestrator | 2026-01-03 00:52:46.279962 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-03 00:52:46.279969 | orchestrator | Saturday 03 January 2026 00:52:41 +0000 (0:00:00.736) 0:02:22.640 ****** 2026-01-03 00:52:46.279976 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:52:46.279983 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:52:46.279990 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:52:46.280066 | orchestrator | 2026-01-03 00:52:46.280076 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:52:46.280083 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-03 00:52:46.280091 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-03 00:52:46.280097 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-03 00:52:46.280104 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:52:46.280111 | orchestrator | testbed-node-4 
: ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:52:46.280118 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 00:52:46.280126 | orchestrator | 2026-01-03 00:52:46.280132 | orchestrator | 2026-01-03 00:52:46.280139 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:52:46.280145 | orchestrator | Saturday 03 January 2026 00:52:42 +0000 (0:00:00.905) 0:02:23.545 ****** 2026-01-03 00:52:46.280152 | orchestrator | =============================================================================== 2026-01-03 00:52:46.280158 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 29.18s 2026-01-03 00:52:46.280164 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.45s 2026-01-03 00:52:46.280175 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 15.16s 2026-01-03 00:52:46.280181 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 10.43s 2026-01-03 00:52:46.280186 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.97s 2026-01-03 00:52:46.280193 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.51s 2026-01-03 00:52:46.280199 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.28s 2026-01-03 00:52:46.280219 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.19s 2026-01-03 00:52:46.280225 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.15s 2026-01-03 00:52:46.280230 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.75s 2026-01-03 00:52:46.280236 | orchestrator | ovn-controller : Copying over systemd override 
-------------------------- 1.98s 2026-01-03 00:52:46.280241 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.93s 2026-01-03 00:52:46.280248 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.84s 2026-01-03 00:52:46.280255 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.81s 2026-01-03 00:52:46.280262 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.61s 2026-01-03 00:52:46.280268 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.60s 2026-01-03 00:52:46.280275 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.49s 2026-01-03 00:52:46.280281 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.44s 2026-01-03 00:52:46.280289 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.36s 2026-01-03 00:52:46.280295 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.30s 2026-01-03 00:52:46.280302 | orchestrator | 2026-01-03 00:52:46 | INFO  | Task 574b178b-5b68-42b2-b21d-e51cfb918523 is in state SUCCESS 2026-01-03 00:52:46.280310 | orchestrator | 2026-01-03 00:52:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:49.319527 | orchestrator | 2026-01-03 00:52:49 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:52:49.322230 | orchestrator | 2026-01-03 00:52:49 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:52:49.322283 | orchestrator | 2026-01-03 00:52:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:52.365493 | orchestrator | 2026-01-03 00:52:52 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:52:52.367623 | orchestrator | 2026-01-03 00:52:52 | INFO  | Task 
6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:52:52.367675 | orchestrator | 2026-01-03 00:52:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:55.409172 | orchestrator | 2026-01-03 00:52:55 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:52:55.409220 | orchestrator | 2026-01-03 00:52:55 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:52:55.409226 | orchestrator | 2026-01-03 00:52:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:52:58.455448 | orchestrator | 2026-01-03 00:52:58 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:52:58.455906 | orchestrator | 2026-01-03 00:52:58 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:52:58.456830 | orchestrator | 2026-01-03 00:52:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:53:01.507896 | orchestrator | 2026-01-03 00:53:01 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:53:01.509500 | orchestrator | 2026-01-03 00:53:01 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:53:01.509551 | orchestrator | 2026-01-03 00:53:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:53:04.552184 | orchestrator | 2026-01-03 00:53:04 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:53:04.553889 | orchestrator | 2026-01-03 00:53:04 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:53:04.553953 | orchestrator | 2026-01-03 00:53:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:53:07.604567 | orchestrator | 2026-01-03 00:53:07 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED 2026-01-03 00:53:07.606568 | orchestrator | 2026-01-03 00:53:07 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 
00:53:07.607664 | orchestrator | 2026-01-03 00:53:07 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:53:10.649270 | orchestrator | 2026-01-03 00:53:10 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:53:10.649436 | orchestrator | 2026-01-03 00:53:10 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:53:10.649920 | orchestrator | 2026-01-03 00:53:10 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:55:40.071281 | orchestrator | 2026-01-03 00:55:40 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state STARTED
2026-01-03 00:55:40.072999 | orchestrator | 2026-01-03 00:55:40 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:55:40.073074 | orchestrator | 2026-01-03 00:55:40 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:55:43.129109 | orchestrator | 2026-01-03 00:55:43 | INFO  | Task bb4e8895-c456-4230-90e3-c365a8bc769c is in state SUCCESS
2026-01-03 00:55:43.130993 | orchestrator |
2026-01-03 00:55:43.131064 | orchestrator |
2026-01-03 00:55:43.131071 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 00:55:43.131077 | orchestrator |
2026-01-03 00:55:43.131082 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 00:55:43.131089 | orchestrator | Saturday 03 January 2026 00:49:07 +0000 (0:00:00.409) 0:00:00.409 ******
2026-01-03 00:55:43.131095 |
orchestrator | ok: [testbed-node-0]
2026-01-03 00:55:43.131103 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:55:43.131109 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:55:43.131115 | orchestrator |
2026-01-03 00:55:43.131121 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 00:55:43.131127 | orchestrator | Saturday 03 January 2026 00:49:07 +0000 (0:00:00.583) 0:00:00.993 ******
2026-01-03 00:55:43.131134 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-01-03 00:55:43.131141 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-03 00:55:43.131146 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-03 00:55:43.131152 | orchestrator |
2026-01-03 00:55:43.131159 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-03 00:55:43.131165 | orchestrator |
2026-01-03 00:55:43.131171 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-03 00:55:43.131178 | orchestrator | Saturday 03 January 2026 00:49:08 +0000 (0:00:00.908) 0:00:01.902 ******
2026-01-03 00:55:43.131185 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:55:43.131191 | orchestrator |
2026-01-03 00:55:43.131195 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-03 00:55:43.131199 | orchestrator | Saturday 03 January 2026 00:49:09 +0000 (0:00:00.832) 0:00:02.734 ******
2026-01-03 00:55:43.131203 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:55:43.131207 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:55:43.131211 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:55:43.131215 | orchestrator |
2026-01-03 00:55:43.131219 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-03 00:55:43.131223 | orchestrator | Saturday 03 January 2026 00:49:11 +0000 (0:00:01.803) 0:00:04.538 ******
2026-01-03 00:55:43.131227 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:55:43.131231 | orchestrator |
2026-01-03 00:55:43.131235 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-03 00:55:43.131239 | orchestrator | Saturday 03 January 2026 00:49:12 +0000 (0:00:00.947) 0:00:05.485 ******
2026-01-03 00:55:43.131260 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:55:43.131264 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:55:43.131268 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:55:43.131272 | orchestrator |
2026-01-03 00:55:43.131276 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-03 00:55:43.131280 | orchestrator | Saturday 03 January 2026 00:49:13 +0000 (0:00:00.896) 0:00:06.382 ******
2026-01-03 00:55:43.131284 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-03 00:55:43.131288 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-03 00:55:43.131292 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-03 00:55:43.131296 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-03 00:55:43.131300 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-03 00:55:43.131365 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-03 00:55:43.131371 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-03 00:55:43.131376 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-03 00:55:43.131380 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-03 00:55:43.131384 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-03 00:55:43.131387 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-03 00:55:43.131391 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-03 00:55:43.131395 | orchestrator |
2026-01-03 00:55:43.131399 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-03 00:55:43.131403 | orchestrator | Saturday 03 January 2026 00:49:16 +0000 (0:00:03.586) 0:00:09.969 ******
2026-01-03 00:55:43.131407 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-03 00:55:43.131411 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-03 00:55:43.131415 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-03 00:55:43.131419 | orchestrator |
2026-01-03 00:55:43.131423 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-03 00:55:43.131427 | orchestrator | Saturday 03 January 2026 00:49:17 +0000 (0:00:00.864) 0:00:10.835 ******
2026-01-03 00:55:43.131431 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-03 00:55:43.131445 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-03 00:55:43.131449 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-03 00:55:43.131453 | orchestrator |
2026-01-03 00:55:43.131457 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-03 00:55:43.131461 | orchestrator | Saturday 03 January 2026 00:49:19 +0000 (0:00:01.687) 0:00:12.522 ******
2026-01-03 00:55:43.131465 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-01-03 00:55:43.131469 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.131525 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-01-03 00:55:43.131530 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.131534 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-01-03 00:55:43.131538 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.131542 | orchestrator |
2026-01-03 00:55:43.131546 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-01-03 00:55:43.131549 | orchestrator | Saturday 03 January 2026 00:49:20 +0000 (0:00:01.235) 0:00:13.758 ******
2026-01-03 00:55:43.131556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.131570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.131575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.131580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.131586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.131597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.131603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.131613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.131644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.131650 | orchestrator |
2026-01-03 00:55:43.131654 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-01-03 00:55:43.131659 | orchestrator | Saturday 03 January 2026 00:49:23 +0000 (0:00:03.285) 0:00:17.044 ******
2026-01-03 00:55:43.131664 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:55:43.131668 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:55:43.131672 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:55:43.131677 | orchestrator |
2026-01-03 00:55:43.131681 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-01-03 00:55:43.131686 | orchestrator | Saturday 03 January 2026 00:49:25 +0000 (0:00:01.742) 0:00:18.787 ******
2026-01-03 00:55:43.131690 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-01-03 00:55:43.131694 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-01-03 00:55:43.131699 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-01-03 00:55:43.131703 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-01-03 00:55:43.131707 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-01-03 00:55:43.131711 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-01-03 00:55:43.131716 | orchestrator |
2026-01-03 00:55:43.131720 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-01-03 00:55:43.131724 | orchestrator | Saturday 03 January 2026 00:49:28 +0000 (0:00:02.624) 0:00:21.411 ******
2026-01-03 00:55:43.131729 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:55:43.131733 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:55:43.131738 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:55:43.131742 | orchestrator |
2026-01-03 00:55:43.131747 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-01-03 00:55:43.131751 | orchestrator | Saturday 03 January 2026 00:49:29 +0000 (0:00:01.121) 0:00:22.533 ******
2026-01-03 00:55:43.131756 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:55:43.131760 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:55:43.131764 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:55:43.131769 | orchestrator |
2026-01-03 00:55:43.131773 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-01-03 00:55:43.131777 | orchestrator | Saturday 03 January 2026 00:49:31 +0000 (0:00:02.667) 0:00:25.201 ******
2026-01-03 00:55:43.131782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.131799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.131804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.131810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7e31082455b19ac23cc167c2044a132cd611e95b', '__omit_place_holder__7e31082455b19ac23cc167c2044a132cd611e95b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-01-03 00:55:43.131814 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.131819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.131824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.131828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.131836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7e31082455b19ac23cc167c2044a132cd611e95b', '__omit_place_holder__7e31082455b19ac23cc167c2044a132cd611e95b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-03 00:55:43.131847 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.131856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.131861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.131866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.131871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7e31082455b19ac23cc167c2044a132cd611e95b', '__omit_place_holder__7e31082455b19ac23cc167c2044a132cd611e95b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-03 00:55:43.131875 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.131880 | orchestrator | 2026-01-03 00:55:43.131884 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-03 00:55:43.131888 | orchestrator | Saturday 03 January 2026 00:49:33 +0000 (0:00:01.062) 0:00:26.264 ****** 2026-01-03 00:55:43.131893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-03 00:55:43.131903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-03 00:55:43.131913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-03 00:55:43.131918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:55:43.131922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.131927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7e31082455b19ac23cc167c2044a132cd611e95b', '__omit_place_holder__7e31082455b19ac23cc167c2044a132cd611e95b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-03 00:55:43.131932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:55:43.131947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.131955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7e31082455b19ac23cc167c2044a132cd611e95b', '__omit_place_holder__7e31082455b19ac23cc167c2044a132cd611e95b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-03 00:55:43.132010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:55:43.132015 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.132019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7e31082455b19ac23cc167c2044a132cd611e95b', '__omit_place_holder__7e31082455b19ac23cc167c2044a132cd611e95b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-03 00:55:43.132023 | orchestrator | 2026-01-03 00:55:43.132027 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-03 00:55:43.132031 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:03.171) 0:00:29.435 ****** 2026-01-03 00:55:43.132055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-03 00:55:43.132064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-03 00:55:43.132075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-03 00:55:43.132088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:55:43.132094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:55:43.132100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:55:43.132107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:55:43.132113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:55:43.132125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:55:43.132131 | orchestrator | 2026-01-03 00:55:43.132137 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-03 00:55:43.132144 | orchestrator | Saturday 03 January 2026 00:49:39 +0000 (0:00:03.793) 0:00:33.229 ****** 2026-01-03 00:55:43.132151 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-03 00:55:43.132180 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-03 00:55:43.132188 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-03 00:55:43.132194 | orchestrator | 2026-01-03 
00:55:43.132201 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-03 00:55:43.132211 | orchestrator | Saturday 03 January 2026 00:49:43 +0000 (0:00:03.689) 0:00:36.919 ****** 2026-01-03 00:55:43.132217 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-03 00:55:43.132224 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-03 00:55:43.132270 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-03 00:55:43.132277 | orchestrator | 2026-01-03 00:55:43.132691 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-03 00:55:43.132713 | orchestrator | Saturday 03 January 2026 00:49:47 +0000 (0:00:04.020) 0:00:40.940 ****** 2026-01-03 00:55:43.132719 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.132727 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.132732 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.132739 | orchestrator | 2026-01-03 00:55:43.132745 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-03 00:55:43.132751 | orchestrator | Saturday 03 January 2026 00:49:48 +0000 (0:00:00.542) 0:00:41.482 ****** 2026-01-03 00:55:43.132757 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-03 00:55:43.132765 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-03 00:55:43.132770 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-03 00:55:43.132776 | orchestrator | 2026-01-03 
00:55:43.132782 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-03 00:55:43.132788 | orchestrator | Saturday 03 January 2026 00:49:50 +0000 (0:00:02.282) 0:00:43.765 ****** 2026-01-03 00:55:43.132795 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-03 00:55:43.132801 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-03 00:55:43.132807 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-03 00:55:43.132813 | orchestrator | 2026-01-03 00:55:43.132829 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-03 00:55:43.132895 | orchestrator | Saturday 03 January 2026 00:49:53 +0000 (0:00:02.711) 0:00:46.476 ****** 2026-01-03 00:55:43.132900 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-03 00:55:43.132905 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-03 00:55:43.132909 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-03 00:55:43.132913 | orchestrator | 2026-01-03 00:55:43.132917 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-03 00:55:43.132921 | orchestrator | Saturday 03 January 2026 00:49:55 +0000 (0:00:01.823) 0:00:48.299 ****** 2026-01-03 00:55:43.132925 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-03 00:55:43.132929 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-03 00:55:43.132933 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-03 00:55:43.132937 | orchestrator | 2026-01-03 00:55:43.132941 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 
2026-01-03 00:55:43.132945 | orchestrator | Saturday 03 January 2026 00:49:56 +0000 (0:00:01.710) 0:00:50.010 ****** 2026-01-03 00:55:43.132949 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.132955 | orchestrator | 2026-01-03 00:55:43.132961 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-03 00:55:43.133030 | orchestrator | Saturday 03 January 2026 00:49:57 +0000 (0:00:01.052) 0:00:51.063 ****** 2026-01-03 00:55:43.133040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-03 00:55:43.133054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-03 00:55:43.133071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-03 00:55:43.133079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:55:43.133094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:55:43.133101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-03 00:55:43.133108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:55:43.133117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:55:43.133127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-03 00:55:43.133134 | orchestrator | 2026-01-03 00:55:43.133140 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-03 00:55:43.133146 | orchestrator | Saturday 03 January 2026 00:50:00 +0000 (0:00:02.988) 0:00:54.051 ****** 2026-01-03 00:55:43.133159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.133177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133191 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.133198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.133205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133211 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.133263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.133323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133331 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.133337 | orchestrator | 2026-01-03 00:55:43.133344 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-03 00:55:43.133351 | orchestrator | Saturday 03 January 2026 00:50:01 +0000 (0:00:00.914) 0:00:54.966 ****** 2026-01-03 00:55:43.133358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-01-03 00:55:43.133373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133380 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.133391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.133419 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133426 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.133432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.133445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133452 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.133459 | orchestrator | 2026-01-03 00:55:43.133466 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-03 00:55:43.133472 | orchestrator | Saturday 03 January 2026 00:50:02 +0000 (0:00:00.937) 0:00:55.904 ****** 2026-01-03 00:55:43.133480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.133509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133516 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.133523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.133537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133544 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.133551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.133578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133590 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.133597 | orchestrator | 2026-01-03 00:55:43.133603 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-03 00:55:43.133610 | orchestrator | Saturday 03 January 2026 00:50:03 +0000 (0:00:00.843) 0:00:56.747 ****** 2026-01-03 00:55:43.133725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.133746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133753 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.133760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.133786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133793 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.133806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.133820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133827 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.133833 | orchestrator | 2026-01-03 00:55:43.133840 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-03 00:55:43.133846 | orchestrator | Saturday 03 January 2026 00:50:04 +0000 (0:00:00.735) 0:00:57.483 ****** 2026-01-03 00:55:43.133902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.133925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133932 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.133977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.133986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.133992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.133999 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.134005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-03 00:55:43.134074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-03 00:55:43.134094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-03 00:55:43.134101 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.134108 | orchestrator | 2026-01-03 00:55:43.134114 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-01-03 00:55:43.134121 | orchestrator | Saturday 03 January 2026 00:50:05 +0000 (0:00:01.131) 0:00:58.614 ****** 2026-01-03 00:55:43.134143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-01-03 00:55:43.134158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.134166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.134172 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.134179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.134185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.134197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.134203 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.134209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.134225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.134232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.134239 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.134246 | orchestrator |
2026-01-03 00:55:43.134252 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-01-03 00:55:43.134258 | orchestrator | Saturday 03 January 2026 00:50:07 +0000 (0:00:01.807) 0:01:00.421 ******
2026-01-03 00:55:43.134265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.134271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.134283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.134289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.134336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.134354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.134361 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.134369 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.134377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.134385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.134393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.134411 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.134418 | orchestrator |
2026-01-03 00:55:43.134424 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-01-03 00:55:43.134431 | orchestrator | Saturday 03 January 2026 00:50:08 +0000 (0:00:01.022) 0:01:01.444 ******
2026-01-03 00:55:43.134438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.134447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.134458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.134465 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.134477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.134484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.134491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.134504 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.134512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.134518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.134525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.134673 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.134688 | orchestrator |
2026-01-03 00:55:43.134694 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-01-03 00:55:43.134710 | orchestrator | Saturday 03 January 2026 00:50:08 +0000 (0:00:00.721) 0:01:02.166 ******
2026-01-03 00:55:43.134717 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-01-03 00:55:43.134725 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-01-03 00:55:43.134740 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-01-03 00:55:43.134747 | orchestrator |
2026-01-03 00:55:43.134752 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-01-03 00:55:43.134759 | orchestrator | Saturday 03 January 2026 00:50:10 +0000 (0:00:01.969) 0:01:04.135 ******
2026-01-03 00:55:43.134765 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-01-03 00:55:43.134773 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-01-03 00:55:43.134779 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-01-03 00:55:43.134785 | orchestrator |
2026-01-03 00:55:43.134792 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-01-03 00:55:43.134798 | orchestrator | Saturday 03 January 2026 00:50:12 +0000 (0:00:01.504) 0:01:05.639 ******
2026-01-03 00:55:43.134804 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-01-03 00:55:43.134821 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-01-03 00:55:43.134827 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-03 00:55:43.134833 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.134839 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-01-03 00:55:43.134846 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-03 00:55:43.134851 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.134912 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-03 00:55:43.134922 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.134928 | orchestrator |
2026-01-03 00:55:43.134935 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-01-03 00:55:43.134942 | orchestrator | Saturday 03 January 2026 00:50:13 +0000 (0:00:00.799) 0:01:06.439 ******
2026-01-03 00:55:43.134950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.134958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.134964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-03 00:55:43.134987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.134995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.135009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-03 00:55:43.135016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.135023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.135030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-03 00:55:43.135038 | orchestrator |
2026-01-03 00:55:43.135042 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-01-03 00:55:43.135046 | orchestrator | Saturday 03 January 2026 00:50:15 +0000 (0:00:02.613) 0:01:09.052 ******
2026-01-03 00:55:43.135050 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:55:43.135054 | orchestrator |
2026-01-03 00:55:43.135058 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-01-03 00:55:43.135062 | orchestrator | Saturday 03 January 2026 00:50:16 +0000 (0:00:00.620) 0:01:09.672 ******
2026-01-03 00:55:43.135070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-03 00:55:43.135083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-03 00:55:43.135088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.135092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.135096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-03 00:55:43.135101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-03 00:55:43.135107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.135131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.135140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-03 00:55:43.135144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-03 00:55:43.135148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.135152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.135156 | orchestrator |
2026-01-03 00:55:43.135160 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-01-03 00:55:43.135164 | orchestrator | Saturday 03 January 2026 00:50:21 +0000 (0:00:04.793) 0:01:14.466 ******
2026-01-03 00:55:43.135171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-03 00:55:43.135183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-03 00:55:43.135188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.135192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.135196 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.135200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-03 00:55:43.135204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-03 00:55:43.135208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.135219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.135223 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.135239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-03 00:55:43.135288 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-03 00:55:43.135299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135313 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.135320 | orchestrator | 2026-01-03 00:55:43.135338 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-03 
00:55:43.135345 | orchestrator | Saturday 03 January 2026 00:50:22 +0000 (0:00:01.408) 0:01:15.874 ****** 2026-01-03 00:55:43.135352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-03 00:55:43.135378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-03 00:55:43.135398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-03 00:55:43.135403 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.135409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-03 00:55:43.135444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-03 00:55:43.135452 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.135459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-03 00:55:43.135466 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.135471 | orchestrator | 2026-01-03 00:55:43.135524 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-03 00:55:43.135532 | orchestrator | Saturday 03 January 2026 00:50:23 +0000 (0:00:00.981) 0:01:16.856 ****** 2026-01-03 
00:55:43.135538 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.135544 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.135551 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.135558 | orchestrator | 2026-01-03 00:55:43.135564 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-03 00:55:43.135571 | orchestrator | Saturday 03 January 2026 00:50:25 +0000 (0:00:01.465) 0:01:18.321 ****** 2026-01-03 00:55:43.135577 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.135583 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.135588 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.135594 | orchestrator | 2026-01-03 00:55:43.135600 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-03 00:55:43.135606 | orchestrator | Saturday 03 January 2026 00:50:27 +0000 (0:00:02.076) 0:01:20.398 ****** 2026-01-03 00:55:43.135612 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.135641 | orchestrator | 2026-01-03 00:55:43.135648 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-03 00:55:43.135654 | orchestrator | Saturday 03 January 2026 00:50:28 +0000 (0:00:00.885) 0:01:21.284 ****** 2026-01-03 00:55:43.135659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.135665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.135704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135716 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.135728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135741 | orchestrator | 2026-01-03 00:55:43.135745 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-03 00:55:43.135749 | orchestrator | Saturday 03 January 2026 00:50:32 +0000 (0:00:04.280) 0:01:25.564 ****** 2026-01-03 00:55:43.135760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.135764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135773 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.135779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.135789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.135808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135813 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.135836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.135859 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.135867 | orchestrator | 2026-01-03 00:55:43.135874 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-03 00:55:43.135881 | orchestrator | Saturday 03 January 2026 00:50:32 +0000 (0:00:00.634) 0:01:26.198 ****** 2026-01-03 00:55:43.135888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-03 00:55:43.135895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-03 00:55:43.135903 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.135909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-03 00:55:43.135915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-03 00:55:43.135921 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.135927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-03 00:55:43.135933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-03 00:55:43.136032 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.136043 | orchestrator | 2026-01-03 00:55:43.136050 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-03 00:55:43.136056 | orchestrator | Saturday 03 January 2026 00:50:34 +0000 (0:00:01.100) 0:01:27.299 ****** 2026-01-03 00:55:43.136069 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.136075 | orchestrator | changed: [testbed-node-2] 
2026-01-03 00:55:43.136081 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.136087 | orchestrator | 2026-01-03 00:55:43.136093 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-03 00:55:43.136100 | orchestrator | Saturday 03 January 2026 00:50:35 +0000 (0:00:01.761) 0:01:29.060 ****** 2026-01-03 00:55:43.136105 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.136111 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.136116 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.136122 | orchestrator | 2026-01-03 00:55:43.136188 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-03 00:55:43.136197 | orchestrator | Saturday 03 January 2026 00:50:38 +0000 (0:00:02.542) 0:01:31.603 ****** 2026-01-03 00:55:43.136204 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.136211 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.136217 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.136224 | orchestrator | 2026-01-03 00:55:43.136230 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-03 00:55:43.136237 | orchestrator | Saturday 03 January 2026 00:50:38 +0000 (0:00:00.244) 0:01:31.847 ****** 2026-01-03 00:55:43.136243 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.136250 | orchestrator | 2026-01-03 00:55:43.136256 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-03 00:55:43.136276 | orchestrator | Saturday 03 January 2026 00:50:39 +0000 (0:00:00.685) 0:01:32.533 ****** 2026-01-03 00:55:43.136285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-03 00:55:43.136295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-03 00:55:43.136302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-03 00:55:43.136309 | orchestrator | 2026-01-03 00:55:43.136314 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-03 00:55:43.136318 | orchestrator | Saturday 03 January 2026 00:50:42 +0000 (0:00:03.222) 0:01:35.755 ****** 2026-01-03 00:55:43.136332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-03 00:55:43.136336 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.136340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 
check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-03 00:55:43.136349 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.136353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-03 00:55:43.136357 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.136361 | orchestrator | 2026-01-03 00:55:43.136365 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-03 00:55:43.136368 | orchestrator | Saturday 03 January 2026 00:50:44 +0000 (0:00:01.861) 0:01:37.617 ****** 2026-01-03 00:55:43.136374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-03 00:55:43.136381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-03 00:55:43.136386 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.136390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-03 00:55:43.136397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-03 00:55:43.136401 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.136408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-03 00:55:43.136416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-03 00:55:43.136420 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.136424 | orchestrator | 2026-01-03 00:55:43.136428 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-03 00:55:43.136432 | orchestrator | Saturday 03 January 2026 00:50:46 +0000 (0:00:02.041) 0:01:39.658 ****** 2026-01-03 00:55:43.136436 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.136440 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.136443 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.136448 | orchestrator | 2026-01-03 00:55:43.136452 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-03 00:55:43.136456 | orchestrator | Saturday 03 January 2026 00:50:47 +0000 (0:00:00.634) 0:01:40.293 ****** 2026-01-03 00:55:43.136460 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.136463 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.136467 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.136471 | orchestrator | 2026-01-03 00:55:43.136478 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-03 00:55:43.136484 | orchestrator | Saturday 03 January 2026 00:50:48 +0000 (0:00:01.061) 
0:01:41.355 ****** 2026-01-03 00:55:43.136490 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.136522 | orchestrator | 2026-01-03 00:55:43.136528 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-03 00:55:43.136534 | orchestrator | Saturday 03 January 2026 00:50:48 +0000 (0:00:00.655) 0:01:42.010 ****** 2026-01-03 00:55:43.136541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.136549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136566 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.136600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.136657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136677 | orchestrator | 2026-01-03 00:55:43.136684 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-03 00:55:43.136690 | orchestrator | Saturday 03 January 2026 00:50:53 +0000 (0:00:04.973) 0:01:46.983 ****** 2026-01-03 00:55:43.136697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.136709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136821 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.136828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.136836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136869 | 
orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.136881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.136889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.136915 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.136922 | orchestrator | 2026-01-03 00:55:43.136929 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-03 00:55:43.136936 | orchestrator | Saturday 03 January 2026 00:50:55 +0000 (0:00:01.362) 0:01:48.346 ****** 2026-01-03 00:55:43.136944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-03 00:55:43.136954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-03 00:55:43.136960 | 
orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.136967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-03 00:55:43.136977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-03 00:55:43.136985 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.136992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-03 00:55:43.137003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-03 00:55:43.137011 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.137018 | orchestrator | 2026-01-03 00:55:43.137025 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-03 00:55:43.137032 | orchestrator | Saturday 03 January 2026 00:50:56 +0000 (0:00:01.197) 0:01:49.543 ****** 2026-01-03 00:55:43.137039 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.137046 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.137053 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.137060 | orchestrator | 2026-01-03 00:55:43.137068 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-03 00:55:43.137075 | orchestrator | Saturday 03 January 2026 00:50:57 +0000 (0:00:01.399) 0:01:50.943 ****** 2026-01-03 
00:55:43.137082 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.137089 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.137096 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.137103 | orchestrator | 2026-01-03 00:55:43.137110 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-03 00:55:43.137117 | orchestrator | Saturday 03 January 2026 00:50:59 +0000 (0:00:02.296) 0:01:53.240 ****** 2026-01-03 00:55:43.137124 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.137131 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.137138 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.137145 | orchestrator | 2026-01-03 00:55:43.137152 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-03 00:55:43.137159 | orchestrator | Saturday 03 January 2026 00:51:00 +0000 (0:00:00.641) 0:01:53.881 ****** 2026-01-03 00:55:43.137164 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.137170 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.137179 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.137187 | orchestrator | 2026-01-03 00:55:43.137193 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-03 00:55:43.137209 | orchestrator | Saturday 03 January 2026 00:51:00 +0000 (0:00:00.330) 0:01:54.212 ****** 2026-01-03 00:55:43.137216 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.137222 | orchestrator | 2026-01-03 00:55:43.137229 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-03 00:55:43.137239 | orchestrator | Saturday 03 January 2026 00:51:01 +0000 (0:00:00.851) 0:01:55.063 ****** 2026-01-03 00:55:43.137247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-03 00:55:43.137255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-03 00:55:43.137269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-03 
00:55:43.137311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-03 00:55:43.137388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-03 00:55:43.137406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-03 00:55:43.137443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-03 00:55:43.137450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.137571 | orchestrator | 2026-01-03 00:55:43.137579 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-03 00:55:43.137586 | orchestrator | Saturday 03 January 2026 00:51:06 +0000 (0:00:04.995) 0:02:00.059 ****** 2026-01-03 00:55:43.137597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-03 00:55:43.138534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-03 
00:55:43.138582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138636 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.138659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-03 00:55:43.138664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-03 00:55:43.138672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2026-01-03 00:55:43.138680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138695 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.138703 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-03 00:55:43.138711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-03 00:55:43.138716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.138749 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.138753 | orchestrator | 2026-01-03 00:55:43.138758 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-03 00:55:43.138762 | orchestrator | Saturday 03 January 2026 00:51:07 +0000 (0:00:00.990) 0:02:01.049 ****** 2026-01-03 00:55:43.138766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-03 00:55:43.138771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-03 00:55:43.138776 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.138780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-03 00:55:43.138784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}})  2026-01-03 00:55:43.138788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-03 00:55:43.138792 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.138795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-03 00:55:43.138799 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.138803 | orchestrator | 2026-01-03 00:55:43.138807 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-03 00:55:43.138811 | orchestrator | Saturday 03 January 2026 00:51:08 +0000 (0:00:01.175) 0:02:02.225 ****** 2026-01-03 00:55:43.138815 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.138819 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.138822 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.138826 | orchestrator | 2026-01-03 00:55:43.138830 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-03 00:55:43.138834 | orchestrator | Saturday 03 January 2026 00:51:10 +0000 (0:00:01.914) 0:02:04.140 ****** 2026-01-03 00:55:43.138838 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.138841 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.138845 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.138849 | orchestrator | 2026-01-03 00:55:43.138853 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-03 00:55:43.138857 | orchestrator | Saturday 03 January 2026 00:51:12 +0000 (0:00:01.963) 0:02:06.103 ****** 2026-01-03 00:55:43.138861 | 
orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.138864 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.138868 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.138872 | orchestrator | 2026-01-03 00:55:43.138876 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-03 00:55:43.138880 | orchestrator | Saturday 03 January 2026 00:51:13 +0000 (0:00:00.563) 0:02:06.667 ****** 2026-01-03 00:55:43.138884 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.138888 | orchestrator | 2026-01-03 00:55:43.138892 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-03 00:55:43.138895 | orchestrator | Saturday 03 January 2026 00:51:14 +0000 (0:00:00.830) 0:02:07.497 ****** 2026-01-03 00:55:43.138912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-03 00:55:43.138918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-03 00:55:43.138925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-03 00:55:43.138939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-03 00:55:43.138944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-03 00:55:43.138957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 
'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-03 00:55:43.138962 | orchestrator | 2026-01-03 00:55:43.138966 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 
2026-01-03 00:55:43.138970 | orchestrator | Saturday 03 January 2026 00:51:18 +0000 (0:00:04.251) 0:02:11.749 ****** 2026-01-03 00:55:43.138974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-03 00:55:43.138987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-03 00:55:43.138993 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.138997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-03 00:55:43.139009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-03 00:55:43.139040 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.139046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-03 00:55:43.139054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-03 00:55:43.139063 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.139068 | orchestrator | 2026-01-03 00:55:43.139072 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-03 00:55:43.139077 | orchestrator | Saturday 03 January 2026 00:51:21 +0000 (0:00:03.299) 0:02:15.048 ****** 2026-01-03 00:55:43.139082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-03 00:55:43.139088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-03 00:55:43.139093 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.139098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-03 00:55:43.139102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-03 00:55:43.139113 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.139143 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-03 00:55:43.139173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-03 00:55:43.139181 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.139188 | orchestrator | 2026-01-03 00:55:43.139222 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-03 00:55:43.139227 | orchestrator | Saturday 03 January 2026 00:51:24 +0000 (0:00:02.598) 0:02:17.646 ****** 2026-01-03 00:55:43.139232 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.139236 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.139245 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.139252 | orchestrator | 2026-01-03 00:55:43.139274 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-03 00:55:43.139281 | orchestrator | Saturday 03 January 2026 00:51:25 +0000 (0:00:01.335) 0:02:18.982 ****** 2026-01-03 00:55:43.139288 | orchestrator | changed: [testbed-node-0] 2026-01-03 
00:55:43.139293 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.139297 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.139301 | orchestrator | 2026-01-03 00:55:43.139305 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-03 00:55:43.139312 | orchestrator | Saturday 03 January 2026 00:51:27 +0000 (0:00:01.930) 0:02:20.913 ****** 2026-01-03 00:55:43.139316 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.139320 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.139323 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.139327 | orchestrator | 2026-01-03 00:55:43.139331 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-03 00:55:43.139335 | orchestrator | Saturday 03 January 2026 00:51:28 +0000 (0:00:00.408) 0:02:21.321 ****** 2026-01-03 00:55:43.139339 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.139342 | orchestrator | 2026-01-03 00:55:43.139346 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-03 00:55:43.139350 | orchestrator | Saturday 03 January 2026 00:51:28 +0000 (0:00:00.810) 0:02:22.132 ****** 2026-01-03 00:55:43.139354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2026-01-03 00:55:43.139364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 00:55:43.139368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 00:55:43.139372 | orchestrator | 2026-01-03 00:55:43.139376 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-03 00:55:43.139380 | orchestrator | Saturday 03 January 2026 00:51:32 +0000 (0:00:03.740) 0:02:25.872 ****** 2026-01-03 00:55:43.139384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-03 00:55:43.139393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-03 00:55:43.139398 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.139402 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.139405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}})  2026-01-03 00:55:43.139409 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.139413 | orchestrator | 2026-01-03 00:55:43.139417 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-03 00:55:43.139425 | orchestrator | Saturday 03 January 2026 00:51:33 +0000 (0:00:00.715) 0:02:26.588 ****** 2026-01-03 00:55:43.139429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-03 00:55:43.139433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-03 00:55:43.139437 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.139441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-03 00:55:43.139445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-03 00:55:43.139449 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.139452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-03 00:55:43.139456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-03 00:55:43.139460 | orchestrator | skipping: [testbed-node-2] 
2026-01-03 00:55:43.139464 | orchestrator | 2026-01-03 00:55:43.139468 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-03 00:55:43.139472 | orchestrator | Saturday 03 January 2026 00:51:33 +0000 (0:00:00.625) 0:02:27.213 ****** 2026-01-03 00:55:43.139476 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.139479 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.139483 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.139487 | orchestrator | 2026-01-03 00:55:43.139491 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-03 00:55:43.139495 | orchestrator | Saturday 03 January 2026 00:51:35 +0000 (0:00:01.422) 0:02:28.635 ****** 2026-01-03 00:55:43.139499 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.139502 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.139506 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.139510 | orchestrator | 2026-01-03 00:55:43.139514 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-03 00:55:43.139518 | orchestrator | Saturday 03 January 2026 00:51:37 +0000 (0:00:02.158) 0:02:30.794 ****** 2026-01-03 00:55:43.139521 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.139525 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.139529 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.139533 | orchestrator | 2026-01-03 00:55:43.139536 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-03 00:55:43.139540 | orchestrator | Saturday 03 January 2026 00:51:38 +0000 (0:00:00.533) 0:02:31.328 ****** 2026-01-03 00:55:43.139544 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.139548 | orchestrator | 2026-01-03 00:55:43.139552 | orchestrator | TASK [haproxy-config 
: Copying over horizon haproxy config] ******************** 2026-01-03 00:55:43.139555 | orchestrator | Saturday 03 January 2026 00:51:39 +0000 (0:00:00.956) 0:02:32.284 ****** 2026-01-03 00:55:43.139567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:55:43.139583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:55:43.139599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 00:55:43.139613 | orchestrator | 2026-01-03 00:55:43.139676 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-03 00:55:43.139683 | orchestrator | Saturday 03 January 2026 00:51:42 +0000 (0:00:03.786) 0:02:36.071 ****** 2026-01-03 00:55:43.139698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 00:55:43.139710 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.139716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 00:55:43.139722 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.139737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 00:55:43.139748 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.139754 | orchestrator | 2026-01-03 00:55:43.139760 | orchestrator | TASK [haproxy-config : Configuring firewall for 
horizon] *********************** 2026-01-03 00:55:43.139767 | orchestrator | Saturday 03 January 2026 00:51:44 +0000 (0:00:01.414) 0:02:37.485 ****** 2026-01-03 00:55:43.140164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-03 00:55:43.140189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-03 00:55:43.140198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-03 00:55:43.140205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-03 00:55:43.140212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-03 00:55:43.140222 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.140231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-03 00:55:43.140237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-03 00:55:43.140244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-03 00:55:43.140250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-03 00:55:43.140265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-03 00:55:43.140272 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.140284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}})  2026-01-03 00:55:43.140291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-03 00:55:43.140295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-03 00:55:43.140305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-03 00:55:43.140309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-03 00:55:43.140313 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.140317 | orchestrator | 2026-01-03 00:55:43.140320 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-03 00:55:43.140325 | orchestrator | Saturday 03 January 2026 00:51:45 +0000 (0:00:01.117) 0:02:38.603 ****** 2026-01-03 00:55:43.140329 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.140333 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.140336 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.140340 | orchestrator | 2026-01-03 00:55:43.140344 | orchestrator | TASK [proxysql-config : Copying over 
horizon ProxySQL rules config] ************ 2026-01-03 00:55:43.140348 | orchestrator | Saturday 03 January 2026 00:51:46 +0000 (0:00:01.391) 0:02:39.994 ****** 2026-01-03 00:55:43.140352 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.140355 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.140359 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.140363 | orchestrator | 2026-01-03 00:55:43.140367 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-03 00:55:43.140371 | orchestrator | Saturday 03 January 2026 00:51:49 +0000 (0:00:02.288) 0:02:42.282 ****** 2026-01-03 00:55:43.140374 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.140378 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.140382 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.140386 | orchestrator | 2026-01-03 00:55:43.140390 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-03 00:55:43.140393 | orchestrator | Saturday 03 January 2026 00:51:49 +0000 (0:00:00.402) 0:02:42.685 ****** 2026-01-03 00:55:43.140397 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.140401 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.140405 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.140409 | orchestrator | 2026-01-03 00:55:43.140412 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-03 00:55:43.140421 | orchestrator | Saturday 03 January 2026 00:51:50 +0000 (0:00:00.629) 0:02:43.314 ****** 2026-01-03 00:55:43.140425 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.140429 | orchestrator | 2026-01-03 00:55:43.140433 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-03 00:55:43.140436 | orchestrator | Saturday 03 
January 2026 00:51:51 +0000 (0:00:01.014) 0:02:44.329 ****** 2026-01-03 00:55:43.140441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:55:43.140453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:55:43.140465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:55:43.140472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:55:43.140479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:55:43.140491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:55:43.140501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:55:43.140508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:55:43.140519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:55:43.140526 | orchestrator | 2026-01-03 00:55:43.140532 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-03 00:55:43.140537 | orchestrator | Saturday 03 January 2026 00:51:55 +0000 (0:00:03.931) 0:02:48.260 ****** 2026-01-03 00:55:43.140541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-03 00:55:43.140549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:55:43.140556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:55:43.140562 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.140572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-03 00:55:43.140583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:55:43.140589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-03 00:55:43.140602 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.140609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-03 00:55:43.140615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-03 00:55:43.140658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-03 00:55:43.140664 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.140670 | orchestrator |
2026-01-03 00:55:43.140675 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-01-03 00:55:43.140681 | orchestrator | Saturday 03 January 2026 00:51:55 +0000 (0:00:00.949) 0:02:49.210 ******
2026-01-03 00:55:43.140720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-01-03 00:55:43.140736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-01-03 00:55:43.140744 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.140758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-01-03 00:55:43.140793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-01-03 00:55:43.140822 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.140829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-01-03 00:55:43.140836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-01-03 00:55:43.140843 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.140850 | orchestrator |
2026-01-03 00:55:43.140856 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-01-03 00:55:43.140860 | orchestrator | Saturday 03 January 2026 00:51:56 +0000 (0:00:00.814) 0:02:50.024 ******
2026-01-03 00:55:43.140864 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:55:43.140869 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:55:43.140873 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:55:43.140878 | orchestrator |
2026-01-03 00:55:43.140882 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-01-03 00:55:43.140886 | orchestrator | Saturday 03 January 2026 00:51:58 +0000 (0:00:01.246) 0:02:51.270 ******
2026-01-03 00:55:43.140891 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:55:43.140895 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:55:43.140899 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:55:43.140904 | orchestrator |
2026-01-03 00:55:43.140908 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-01-03 00:55:43.140912 | orchestrator | Saturday 03 January 2026 00:52:00 +0000 (0:00:02.008) 0:02:53.279 ******
2026-01-03 00:55:43.140917 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.140921 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.140925 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.140930 | orchestrator |
2026-01-03 00:55:43.140934 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-01-03 00:55:43.140938 | orchestrator | Saturday 03 January 2026 00:52:00 +0000 (0:00:00.574) 0:02:53.854 ******
2026-01-03 00:55:43.140943 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:55:43.140947 | orchestrator |
2026-01-03 00:55:43.140957 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-01-03 00:55:43.140962 | orchestrator | Saturday 03 January 2026 00:52:01 +0000 (0:00:00.987) 0:02:54.841 ******
2026-01-03 00:55:43.140971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-03 00:55:43.140977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.140992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-03 00:55:43.140997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-03 00:55:43.141009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141014 | orchestrator |
2026-01-03 00:55:43.141018 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-01-03 00:55:43.141023 | orchestrator | Saturday 03 January 2026 00:52:05 +0000 (0:00:03.563) 0:02:58.405 ******
2026-01-03 00:55:43.141028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-03 00:55:43.141039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141043 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.141048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-03 00:55:43.141053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141057 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.141064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-03 00:55:43.141072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141076 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.141081 | orchestrator |
2026-01-03 00:55:43.141088 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-01-03 00:55:43.141092 | orchestrator | Saturday 03 January 2026 00:52:06 +0000 (0:00:01.043) 0:02:59.449 ******
2026-01-03 00:55:43.141098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-01-03 00:55:43.141102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-01-03 00:55:43.141107 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.141111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-01-03 00:55:43.141116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-01-03 00:55:43.141120 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.141125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-01-03 00:55:43.141129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-01-03 00:55:43.141134 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.141138 | orchestrator |
2026-01-03 00:55:43.141143 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-01-03 00:55:43.141147 | orchestrator | Saturday 03 January 2026 00:52:07 +0000 (0:00:00.913) 0:03:00.363 ******
2026-01-03 00:55:43.141152 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:55:43.141156 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:55:43.141161 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:55:43.141165 | orchestrator |
2026-01-03 00:55:43.141169 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-01-03 00:55:43.141173 | orchestrator | Saturday 03 January 2026 00:52:08 +0000 (0:00:01.259) 0:03:01.622 ******
2026-01-03 00:55:43.141177 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:55:43.141181 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:55:43.141184 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:55:43.141188 | orchestrator |
2026-01-03 00:55:43.141192 | orchestrator | TASK [include_role : manila] ***************************************************
2026-01-03 00:55:43.141196 | orchestrator | Saturday 03 January 2026 00:52:10 +0000 (0:00:02.250) 0:03:03.873 ******
2026-01-03 00:55:43.141200 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:55:43.141207 | orchestrator |
2026-01-03 00:55:43.141211 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-01-03 00:55:43.141215 | orchestrator | Saturday 03 January 2026 00:52:11 +0000 (0:00:01.310) 0:03:05.183 ******
2026-01-03 00:55:43.141222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-03 00:55:43.141226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-03 00:55:43.141246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-03 00:55:43.141271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141286 | orchestrator |
2026-01-03 00:55:43.141290 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-01-03 00:55:43.141294 | orchestrator | Saturday 03 January 2026 00:52:15 +0000 (0:00:04.012) 0:03:09.196 ******
2026-01-03 00:55:43.141300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-03 00:55:43.141306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141355 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.141359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-03 00:55:43.141368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141383 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.141390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-03 00:55:43.141394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-03 00:55:43.141409 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.141413 | orchestrator |
2026-01-03 00:55:43.141417 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-01-03 00:55:43.141420 | orchestrator | Saturday 03 January 2026 00:52:16 +0000 (0:00:00.750) 0:03:09.946 ******
2026-01-03 00:55:43.141424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-01-03 00:55:43.141431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-01-03 00:55:43.141435 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.141439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes',
'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-03 00:55:43.141443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-03 00:55:43.141447 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.141451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-03 00:55:43.141454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-03 00:55:43.141458 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.141462 | orchestrator | 2026-01-03 00:55:43.141466 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-03 00:55:43.141470 | orchestrator | Saturday 03 January 2026 00:52:17 +0000 (0:00:01.215) 0:03:11.162 ****** 2026-01-03 00:55:43.141476 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.141480 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.141484 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.141488 | orchestrator | 2026-01-03 00:55:43.141491 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-03 00:55:43.141495 | orchestrator | Saturday 03 January 2026 00:52:19 +0000 (0:00:01.409) 0:03:12.572 ****** 2026-01-03 00:55:43.141499 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.141503 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.141507 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.141511 | orchestrator | 2026-01-03 
00:55:43.141514 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-03 00:55:43.141518 | orchestrator | Saturday 03 January 2026 00:52:21 +0000 (0:00:02.220) 0:03:14.792 ****** 2026-01-03 00:55:43.141525 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.141529 | orchestrator | 2026-01-03 00:55:43.141532 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-03 00:55:43.141536 | orchestrator | Saturday 03 January 2026 00:52:22 +0000 (0:00:01.288) 0:03:16.081 ****** 2026-01-03 00:55:43.141540 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-03 00:55:43.141544 | orchestrator | 2026-01-03 00:55:43.141548 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-03 00:55:43.141552 | orchestrator | Saturday 03 January 2026 00:52:26 +0000 (0:00:03.489) 0:03:19.571 ****** 2026-01-03 00:55:43.141557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:55:43.141564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-03 00:55:43.141568 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.141575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:55:43.141583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-03 00:55:43.141587 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.141593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2026-01-03 00:55:43.141602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-03 00:55:43.141609 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.141613 | orchestrator | 2026-01-03 00:55:43.141653 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-03 00:55:43.141660 | orchestrator | Saturday 03 January 2026 00:52:28 +0000 (0:00:02.118) 0:03:21.690 ****** 2026-01-03 00:55:43.141666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:55:43.141673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-03 00:55:43.141679 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.142054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:55:43.142093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-03 00:55:43.142101 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.142112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:55:43.142119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-03 00:55:43.142126 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.142132 | orchestrator | 2026-01-03 00:55:43.142138 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-03 00:55:43.142151 | orchestrator | Saturday 03 January 2026 00:52:30 +0000 (0:00:02.345) 0:03:24.035 ****** 2026-01-03 00:55:43.142213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-03 00:55:43.142221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-03 00:55:43.142227 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.142234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-03 00:55:43.142239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-03 00:55:43.142249 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.142257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-03 00:55:43.142267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-03 00:55:43.142273 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.142280 | orchestrator | 2026-01-03 00:55:43.142292 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-03 00:55:43.142298 | orchestrator | Saturday 03 January 2026 00:52:33 +0000 (0:00:02.537) 0:03:26.573 ****** 2026-01-03 00:55:43.142305 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.142310 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.142313 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.142317 | orchestrator | 2026-01-03 00:55:43.142321 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-03 00:55:43.142325 | orchestrator | Saturday 03 January 2026 00:52:34 +0000 (0:00:01.649) 0:03:28.222 ****** 2026-01-03 00:55:43.142329 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.142333 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.142336 | orchestrator | 
skipping: [testbed-node-2] 2026-01-03 00:55:43.142340 | orchestrator | 2026-01-03 00:55:43.142344 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-03 00:55:43.142348 | orchestrator | Saturday 03 January 2026 00:52:36 +0000 (0:00:01.215) 0:03:29.438 ****** 2026-01-03 00:55:43.142409 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.142416 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.142420 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.142425 | orchestrator | 2026-01-03 00:55:43.142431 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-03 00:55:43.142437 | orchestrator | Saturday 03 January 2026 00:52:36 +0000 (0:00:00.287) 0:03:29.725 ****** 2026-01-03 00:55:43.142443 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.142450 | orchestrator | 2026-01-03 00:55:43.142781 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-03 00:55:43.142796 | orchestrator | Saturday 03 January 2026 00:52:37 +0000 (0:00:01.160) 0:03:30.885 ****** 2026-01-03 00:55:43.142804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 
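The haproxy-config items in the log above carry per-service dicts ('mode', 'port', 'frontend_tcp_extra'/'backend_tcp_extra', and a 'custom_member_list' of pre-rendered "server ..." lines). In kolla-ansible this rendering is done by the role's own Jinja2 templates; the sketch below is only a hypothetical illustration of how such a dict maps onto an HAProxy "listen" block. The function name and the VIP address used in the bind line are assumptions, not taken from the job.

```python
def render_listen_block(name, svc, vip="192.168.16.254"):
    """Render a kolla-style haproxy service dict into an HAProxy listen block.

    Illustrative sketch only; kolla-ansible's real template handles many more
    keys. The ``vip`` default is an assumed internal VIP, not from the log.
    """
    lines = [f"listen {name}"]
    lines.append(f"    bind {vip}:{svc['port']}")
    lines.append(f"    mode {svc.get('mode', 'http')}")
    # Extra frontend/backend options are copied through verbatim; the log
    # items contain trailing empty strings, which we skip here.
    for opt in svc.get("frontend_tcp_extra", []) + svc.get("backend_tcp_extra", []):
        if opt:
            lines.append(f"    {opt}")
    # custom_member_list entries are already fully rendered "server ..." lines.
    for member in svc.get("custom_member_list", []):
        if member.strip():
            lines.append(f"    {member.strip()}")
    return "\n".join(lines)


# Values copied from the "Configuring firewall for mariadb" items in the log.
mariadb = {
    "enabled": True, "mode": "tcp", "port": "3306", "listen_port": "3306",
    "frontend_tcp_extra": ["option clitcpka", "timeout client 3600s"],
    "backend_tcp_extra": ["option srvtcpka", "timeout server 3600s", ""],
    "custom_member_list": [
        " server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5",
        " server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
        " server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
        "",
    ],
}

print(render_listen_block("mariadb", mariadb))
```

The "backup" suffix on nodes 1 and 2 makes the Galera backend active/passive: HAProxy sends all traffic to testbed-node-0 and only fails over when its clustercheck-based health check fails.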
2026-01-03 00:55:43.142812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-03 00:55:43.142820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-03 00:55:43.142842 | orchestrator |
2026-01-03 00:55:43.142970 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-01-03 00:55:43.142976 | orchestrator | Saturday 03 January 2026 00:52:39 +0000 (0:00:01.415) 0:03:32.301 ******
2026-01-03 00:55:43.143003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-03 00:55:43.143065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-03 00:55:43.143076 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.143083 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.143090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-03 00:55:43.143096 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.143102 | orchestrator |
2026-01-03 00:55:43.143107 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-01-03 00:55:43.143113 | orchestrator | Saturday 03 January 2026 00:52:39 +0000 (0:00:00.407) 0:03:32.708 ******
2026-01-03 00:55:43.143120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-01-03 00:55:43.143127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-01-03 00:55:43.143133 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.143139 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.143145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-01-03 00:55:43.143159 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.143164 | orchestrator |
2026-01-03 00:55:43.143170 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-01-03 00:55:43.143176 | orchestrator | Saturday 03 January 2026 00:52:40 +0000 (0:00:00.941) 0:03:33.650 ******
2026-01-03 00:55:43.143182 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.143187 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.143193 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.143198 | orchestrator |
2026-01-03 00:55:43.143204 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-01-03 00:55:43.143210 | orchestrator | Saturday 03 January 2026 00:52:40 +0000 (0:00:00.469) 0:03:34.119 ******
2026-01-03 00:55:43.143216 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.143222 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.143232 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.143239 | orchestrator |
2026-01-03 00:55:43.143244 | orchestrator | TASK [include_role : mistral] **************************************************
2026-01-03 00:55:43.143250 | orchestrator | Saturday 03 January 2026 00:52:42 +0000 (0:00:00.333) 0:03:35.553 ******
2026-01-03 00:55:43.143256 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.143262 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.143267 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:55:43.143273 | orchestrator |
2026-01-03 00:55:43.143279 | orchestrator | TASK [include_role : neutron] **************************************************
2026-01-03 00:55:43.143284 | orchestrator | Saturday 03 January 2026 00:52:42 +0000 (0:00:00.333) 0:03:35.886 ******
2026-01-03 00:55:43.143290 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:55:43.143296 | orchestrator |
2026-01-03 00:55:43.143302 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-01-03 00:55:43.143308
| orchestrator | Saturday 03 January 2026 00:52:44 +0000 (0:00:01.580) 0:03:37.466 ****** 2026-01-03 00:55:43.143375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-03 00:55:43.143388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-03 00:55:43.143427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.143501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.143508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-03 00:55:43.143531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:55:43.143535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-03 00:55:43.143578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143589 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.143593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-03 00:55:43.143614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  
2026-01-03 00:55:43.143637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-03 00:55:43.143650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.143658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:55:43.143662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.143694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:55:43.143715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-03 00:55:43.143727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.143737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-03 00:55:43.143744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-03 00:55:43.143863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:55:43.143883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-03 00:55:43.143965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.143973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.143981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.143997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:55:43.144058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-03 00:55:43.144078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.144085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-03 00:55:43.144099 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:55:43.144103 | orchestrator | 2026-01-03 00:55:43.144108 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-03 00:55:43.144113 | orchestrator | Saturday 03 January 2026 00:52:48 +0000 (0:00:04.366) 0:03:41.832 ****** 2026-01-03 00:55:43.144156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-03 00:55:43.144171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-03 00:55:43.144195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-03 00:55:43.144259 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.144298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.144360 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-03 00:55:43.144369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-03 00:55:43.144387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:55:43.144393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-03 00:55:43.144478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.144489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144502 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.144536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-03 00:55:43.144544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.144550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-03 00:55:43.144714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.144723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:55:43.144730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:55:43.144736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.144743 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.144753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:55:43.144818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-03 00:55:43.144825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.144838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2026-01-03 00:55:43.144849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-03 00:55:43.144907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.144915 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-03 00:55:43.144922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-03 00:55:43.144929 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:55:43.144940 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.144951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-03 00:55:43.144957 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.144963 | orchestrator | 2026-01-03 00:55:43.144970 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-03 00:55:43.144976 | orchestrator | Saturday 03 January 2026 00:52:50 +0000 (0:00:01.552) 0:03:43.385 ****** 2026-01-03 00:55:43.144984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-03 00:55:43.144993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-03 00:55:43.144999 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.145022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-03 00:55:43.145029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-03 00:55:43.145035 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.145039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-03 00:55:43.145043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-03 00:55:43.145047 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.145051 | orchestrator | 2026-01-03 00:55:43.145054 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-03 00:55:43.145058 | orchestrator | Saturday 03 January 2026 00:52:52 +0000 (0:00:02.378) 0:03:45.763 ****** 2026-01-03 00:55:43.145062 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.145066 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.145070 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.145073 | orchestrator | 2026-01-03 00:55:43.145077 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 
2026-01-03 00:55:43.145081 | orchestrator | Saturday 03 January 2026 00:52:54 +0000 (0:00:01.506) 0:03:47.270 ****** 2026-01-03 00:55:43.145085 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.145089 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.145092 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.145096 | orchestrator | 2026-01-03 00:55:43.145100 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-03 00:55:43.145104 | orchestrator | Saturday 03 January 2026 00:52:56 +0000 (0:00:02.231) 0:03:49.502 ****** 2026-01-03 00:55:43.145108 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.145111 | orchestrator | 2026-01-03 00:55:43.145119 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-03 00:55:43.145131 | orchestrator | Saturday 03 January 2026 00:52:57 +0000 (0:00:01.235) 0:03:50.737 ****** 2026-01-03 00:55:43.145135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.145143 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.145162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.145167 | orchestrator | 2026-01-03 00:55:43.145171 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 
2026-01-03 00:55:43.145175 | orchestrator | Saturday 03 January 2026 00:53:01 +0000 (0:00:04.068) 0:03:54.805 ****** 2026-01-03 00:55:43.145179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.145187 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.145190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.145194 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.145201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.145205 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.145209 | orchestrator | 2026-01-03 00:55:43.145213 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-03 00:55:43.145216 | orchestrator | Saturday 03 January 2026 00:53:02 +0000 (0:00:00.576) 0:03:55.382 ****** 2026-01-03 00:55:43.145220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145230 | orchestrator | skipping: [testbed-node-0] 
2026-01-03 00:55:43.145246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145259 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.145265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145280 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.145294 | orchestrator | 2026-01-03 00:55:43.145300 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-03 00:55:43.145306 | orchestrator | Saturday 03 January 2026 00:53:02 +0000 (0:00:00.779) 0:03:56.162 ****** 2026-01-03 00:55:43.145313 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.145319 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.145324 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.145330 | orchestrator | 2026-01-03 00:55:43.145336 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-03 00:55:43.145341 | orchestrator | Saturday 03 January 2026 00:53:04 +0000 (0:00:01.380) 0:03:57.542 ****** 2026-01-03 00:55:43.145347 | orchestrator | changed: 
[testbed-node-0] 2026-01-03 00:55:43.145353 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.145358 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.145364 | orchestrator | 2026-01-03 00:55:43.145370 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-03 00:55:43.145376 | orchestrator | Saturday 03 January 2026 00:53:06 +0000 (0:00:02.321) 0:03:59.863 ****** 2026-01-03 00:55:43.145382 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.145389 | orchestrator | 2026-01-03 00:55:43.145393 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-03 00:55:43.145397 | orchestrator | Saturday 03 January 2026 00:53:08 +0000 (0:00:01.526) 0:04:01.390 ****** 2026-01-03 00:55:43.145405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.145411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.145436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.145442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.145454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.145461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.145473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.145502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.145518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.145522 | orchestrator | 2026-01-03 00:55:43.145526 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-03 00:55:43.145530 | orchestrator | Saturday 03 January 2026 00:53:12 +0000 (0:00:04.714) 0:04:06.104 ****** 2026-01-03 00:55:43.145535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-03 
00:55:43.145542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.145547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.145551 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.145569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.145577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.145582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.145585 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.145593 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.145597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.145613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.145640 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.145646 | orchestrator | 2026-01-03 00:55:43.145652 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-03 00:55:43.145658 | orchestrator | Saturday 03 January 2026 00:53:14 +0000 (0:00:01.389) 0:04:07.493 ****** 2026-01-03 00:55:43.145665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145690 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.145696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145739 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.145749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-03 00:55:43.145756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}})  2026-01-03 00:55:43.145763 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.145779 | orchestrator | 2026-01-03 00:55:43.145786 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-03 00:55:43.145792 | orchestrator | Saturday 03 January 2026 00:53:15 +0000 (0:00:00.913) 0:04:08.407 ****** 2026-01-03 00:55:43.145798 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.145804 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.145809 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.145815 | orchestrator | 2026-01-03 00:55:43.145821 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-03 00:55:43.145827 | orchestrator | Saturday 03 January 2026 00:53:16 +0000 (0:00:01.498) 0:04:09.906 ****** 2026-01-03 00:55:43.145832 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.145839 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.145845 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.145850 | orchestrator | 2026-01-03 00:55:43.145854 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-03 00:55:43.145858 | orchestrator | Saturday 03 January 2026 00:53:18 +0000 (0:00:02.238) 0:04:12.145 ****** 2026-01-03 00:55:43.145862 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.145866 | orchestrator | 2026-01-03 00:55:43.145870 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-03 00:55:43.145895 | orchestrator | Saturday 03 January 2026 00:53:20 +0000 (0:00:01.700) 0:04:13.846 ****** 2026-01-03 00:55:43.145900 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-03 00:55:43.145905 | orchestrator | 
2026-01-03 00:55:43.145909 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-03 00:55:43.145912 | orchestrator | Saturday 03 January 2026 00:53:21 +0000 (0:00:00.864) 0:04:14.710 ****** 2026-01-03 00:55:43.145917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-03 00:55:43.145921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-03 00:55:43.145925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-03 00:55:43.145929 | orchestrator | 2026-01-03 00:55:43.145934 | orchestrator | TASK [haproxy-config : Add configuration for 
nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-03 00:55:43.145938 | orchestrator | Saturday 03 January 2026 00:53:26 +0000 (0:00:04.795) 0:04:19.506 ****** 2026-01-03 00:55:43.145943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:55:43.145951 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.145960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:55:43.145969 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.145977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:55:43.145983 | orchestrator | 
skipping: [testbed-node-2] 2026-01-03 00:55:43.145989 | orchestrator | 2026-01-03 00:55:43.145996 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-03 00:55:43.146000 | orchestrator | Saturday 03 January 2026 00:53:27 +0000 (0:00:01.066) 0:04:20.572 ****** 2026-01-03 00:55:43.146070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-03 00:55:43.146081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-03 00:55:43.146087 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.146094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-03 00:55:43.146100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-03 00:55:43.146106 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.146112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-03 00:55:43.146117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-03 00:55:43.146123 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.146129 | orchestrator | 2026-01-03 00:55:43.146135 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-03 00:55:43.146140 | orchestrator | Saturday 03 January 2026 00:53:28 +0000 (0:00:01.636) 0:04:22.208 ****** 2026-01-03 00:55:43.146146 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.146152 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.146158 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.146170 | orchestrator | 2026-01-03 00:55:43.146176 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-03 00:55:43.146182 | orchestrator | Saturday 03 January 2026 00:53:31 +0000 (0:00:02.560) 0:04:24.769 ****** 2026-01-03 00:55:43.146188 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.146194 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.146200 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.146207 | orchestrator | 2026-01-03 00:55:43.146212 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-03 00:55:43.146219 | orchestrator | Saturday 03 January 2026 00:53:34 +0000 (0:00:03.215) 0:04:27.984 ****** 2026-01-03 00:55:43.146225 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-03 00:55:43.146232 | orchestrator | 2026-01-03 00:55:43.146243 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-03 00:55:43.146249 | orchestrator | Saturday 03 January 2026 00:53:36 +0000 (0:00:01.452) 0:04:29.437 ****** 2026-01-03 
00:55:43.146260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:55:43.146268 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.146274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:55:43.146280 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.146312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:55:43.146317 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.146321 | orchestrator | 2026-01-03 00:55:43.146325 | orchestrator | TASK 
[haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-03 00:55:43.146329 | orchestrator | Saturday 03 January 2026 00:53:37 +0000 (0:00:01.328) 0:04:30.766 ****** 2026-01-03 00:55:43.146333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:55:43.146337 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.146341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:55:43.146350 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.146355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-03 00:55:43.146363 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.146372 | orchestrator | 2026-01-03 00:55:43.146378 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-03 00:55:43.146384 | orchestrator | Saturday 03 January 2026 00:53:38 +0000 (0:00:01.281) 0:04:32.047 ****** 2026-01-03 00:55:43.146390 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.146397 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.146401 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.146406 | orchestrator | 2026-01-03 00:55:43.146414 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-03 00:55:43.146423 | orchestrator | Saturday 03 January 2026 00:53:40 +0000 (0:00:01.896) 0:04:33.944 ****** 2026-01-03 00:55:43.146429 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:55:43.146435 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:55:43.146441 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:55:43.146447 | orchestrator | 2026-01-03 00:55:43.146453 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-03 00:55:43.146459 | orchestrator | Saturday 03 January 2026 00:53:43 +0000 (0:00:02.395) 0:04:36.340 ****** 2026-01-03 00:55:43.146465 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:55:43.146470 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:55:43.146476 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:55:43.146482 | orchestrator | 2026-01-03 00:55:43.146487 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-03 00:55:43.146493 | orchestrator | Saturday 03 January 2026 00:53:46 +0000 (0:00:03.146) 0:04:39.486 ****** 2026-01-03 00:55:43.146499 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-03 00:55:43.146506 | orchestrator | 2026-01-03 00:55:43.146511 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-03 00:55:43.146517 | orchestrator | Saturday 03 January 2026 00:53:47 +0000 (0:00:00.894) 0:04:40.381 ****** 2026-01-03 00:55:43.146523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-03 00:55:43.146530 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.146664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-03 00:55:43.146703 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.146710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-03 00:55:43.146717 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.146722 | orchestrator | 2026-01-03 00:55:43.146727 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-03 00:55:43.146732 | orchestrator | Saturday 03 January 2026 00:53:48 +0000 (0:00:01.504) 0:04:41.886 ****** 2026-01-03 00:55:43.146736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-03 00:55:43.146740 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.146744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-03 00:55:43.146748 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.146752 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-03 00:55:43.146756 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.146759 | orchestrator | 2026-01-03 00:55:43.146763 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-03 00:55:43.146770 | orchestrator | Saturday 03 January 2026 00:53:50 +0000 (0:00:01.391) 0:04:43.277 ****** 2026-01-03 00:55:43.146774 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.146778 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.146782 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.146786 | orchestrator | 2026-01-03 00:55:43.146790 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-03 00:55:43.146793 | orchestrator | Saturday 03 January 2026 00:53:51 +0000 (0:00:01.607) 0:04:44.885 ****** 2026-01-03 00:55:43.146797 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:55:43.146801 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:55:43.146805 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:55:43.146809 | orchestrator | 2026-01-03 00:55:43.146813 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-03 00:55:43.146820 | orchestrator | Saturday 03 January 2026 00:53:54 +0000 (0:00:02.401) 0:04:47.286 ****** 2026-01-03 00:55:43.146824 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:55:43.146828 | orchestrator | ok: [testbed-node-1] 2026-01-03 
00:55:43.146832 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:55:43.146836 | orchestrator | 2026-01-03 00:55:43.146841 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-03 00:55:43.146848 | orchestrator | Saturday 03 January 2026 00:53:57 +0000 (0:00:03.220) 0:04:50.507 ****** 2026-01-03 00:55:43.146853 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.146861 | orchestrator | 2026-01-03 00:55:43.146870 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-03 00:55:43.146876 | orchestrator | Saturday 03 January 2026 00:53:58 +0000 (0:00:01.631) 0:04:52.139 ****** 2026-01-03 00:55:43.146913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.146921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-03 00:55:43.146927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-03 00:55:43.146939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.146945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-03 00:55:43.146957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.146980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-03 00:55:43.146988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-03 00:55:43.146994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.147001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-03 
00:55:43.147012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-03 00:55:43.147024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.147048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-03 00:55:43.147056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-03 00:55:43.147062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.147069 | orchestrator | 2026-01-03 00:55:43.147075 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-03 00:55:43.147081 | orchestrator | Saturday 03 January 2026 00:54:02 +0000 (0:00:03.471) 0:04:55.611 ****** 2026-01-03 00:55:43.147088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': 
'30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.147102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-03 00:55:43.147110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-03 00:55:43.147136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-03 00:55:43.147144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.147150 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.147157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.147164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-03 00:55:43.147174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-03 00:55:43.147185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-03 00:55:43.147192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.147213 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.147221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.147227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-03 00:55:43.147234 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-03 00:55:43.147240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-03 00:55:43.147255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-03 00:55:43.147262 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.147268 | orchestrator | 2026-01-03 00:55:43.147274 | 
orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-03 00:55:43.147281 | orchestrator | Saturday 03 January 2026 00:54:03 +0000 (0:00:00.716) 0:04:56.327 ****** 2026-01-03 00:55:43.147288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-03 00:55:43.147296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-03 00:55:43.147303 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.147327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-03 00:55:43.147334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-03 00:55:43.147340 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.147346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-03 00:55:43.147353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-03 00:55:43.147360 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.147366 | orchestrator | 
2026-01-03 00:55:43.147372 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-03 00:55:43.147378 | orchestrator | Saturday 03 January 2026 00:54:04 +0000 (0:00:01.592) 0:04:57.920 ****** 2026-01-03 00:55:43.147385 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.147391 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.147397 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.147403 | orchestrator | 2026-01-03 00:55:43.147410 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-03 00:55:43.147416 | orchestrator | Saturday 03 January 2026 00:54:06 +0000 (0:00:01.463) 0:04:59.383 ****** 2026-01-03 00:55:43.147422 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.147428 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.147434 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.147448 | orchestrator | 2026-01-03 00:55:43.147455 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-03 00:55:43.147461 | orchestrator | Saturday 03 January 2026 00:54:08 +0000 (0:00:02.174) 0:05:01.558 ****** 2026-01-03 00:55:43.147467 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.147473 | orchestrator | 2026-01-03 00:55:43.147479 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-03 00:55:43.147484 | orchestrator | Saturday 03 January 2026 00:54:09 +0000 (0:00:01.451) 0:05:03.009 ****** 2026-01-03 00:55:43.147488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:55:43.147497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:55:43.147515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:55:43.147522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 00:55:43.147530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 00:55:43.147538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 00:55:43.147542 | orchestrator | 2026-01-03 00:55:43.147547 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-03 00:55:43.147550 | orchestrator | Saturday 03 January 2026 00:54:15 +0000 (0:00:06.084) 0:05:09.094 ****** 2026-01-03 00:55:43.147565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-03 00:55:43.147570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-03 00:55:43.147578 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.147582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-03 00:55:43.147590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-03 00:55:43.147597 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.147639 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-03 00:55:43.147647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-03 00:55:43.147658 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.147664 | orchestrator | 
2026-01-03 00:55:43.147670 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-03 00:55:43.147676 | orchestrator | Saturday 03 January 2026 00:54:16 +0000 (0:00:00.704) 0:05:09.798 ****** 2026-01-03 00:55:43.147682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-03 00:55:43.147692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-03 00:55:43.147700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-03 00:55:43.147707 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.147714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-03 00:55:43.147720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-03 00:55:43.147727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-03 00:55:43.147733 | orchestrator | skipping: [testbed-node-1] 2026-01-03 
00:55:43.147743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-03 00:55:43.147747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-03 00:55:43.147751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-03 00:55:43.147755 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.147758 | orchestrator | 2026-01-03 00:55:43.147762 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-03 00:55:43.147766 | orchestrator | Saturday 03 January 2026 00:54:17 +0000 (0:00:01.020) 0:05:10.819 ****** 2026-01-03 00:55:43.147770 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.147774 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.147777 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.147781 | orchestrator | 2026-01-03 00:55:43.147785 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-03 00:55:43.147789 | orchestrator | Saturday 03 January 2026 00:54:18 +0000 (0:00:00.961) 0:05:11.781 ****** 2026-01-03 00:55:43.147793 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.147801 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.147805 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.147809 | orchestrator | 2026-01-03 00:55:43.147829 | orchestrator | TASK [include_role : prometheus] 
***********************************************
2026-01-03 00:55:43.147835 | orchestrator | Saturday 03 January 2026 00:54:20 +0000 (0:00:01.536) 0:05:13.317 ******
2026-01-03 00:55:43.147841 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:55:43.147850 | orchestrator |
2026-01-03 00:55:43.147859 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-01-03 00:55:43.147865 | orchestrator | Saturday 03 January 2026 00:54:21 +0000 (0:00:01.403) 0:05:14.721 ******
2026-01-03 00:55:43.147872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-03 00:55:43.147879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 00:55:43.147886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.147897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-03 00:55:43.147904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.147911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 00:55:43.147943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 00:55:43.147952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.147956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-03 00:55:43.147960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 00:55:43.147964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.147972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.147976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 00:55:43.147995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 00:55:43.148004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-03 00:55:43.148009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-03 00:55:43.148016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-03 00:55:43.148109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-03 00:55:43.148115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-03 00:55:43.148119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-03 00:55:43.148149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-03 00:55:43.148160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-03 00:55:43.148167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-03 00:55:43.148187 | orchestrator |
2026-01-03 00:55:43.148191 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-01-03 00:55:43.148195 | orchestrator | Saturday 03 January 2026 00:54:26 +0000 (0:00:04.612) 0:05:19.333 ******
2026-01-03 00:55:43.148203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-03 00:55:43.148213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 00:55:43.148222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-03 00:55:43.148230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 00:55:43.148238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 00:55:43.148248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-03 00:55:43.148260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-03 00:55:43.148274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 00:55:43.148284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-03 00:55:43.148305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-03 00:55:43.148323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-03 00:55:43.148329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-03 00:55:43.148334 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:55:43.148340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-03 00:55:43.148357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-03 00:55:43.148388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-03 00:55:43.148394 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:55:43.148398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-03 00:55:43.148402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-03 00:55:43.148423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter',
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-03 00:55:43.148427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:55:43.148435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 00:55:43.148440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 00:55:43.148445 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.148451 | orchestrator | 2026-01-03 00:55:43.148456 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-03 00:55:43.148465 | orchestrator | Saturday 03 January 2026 00:54:27 +0000 (0:00:01.207) 0:05:20.541 ****** 2026-01-03 00:55:43.148474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-03 00:55:43.148480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-03 00:55:43.148487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-03 00:55:43.148498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-03 00:55:43.148507 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.148511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-03 00:55:43.148515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-03 00:55:43.148519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-03 00:55:43.148527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-03 00:55:43.148531 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.148535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-03 00:55:43.148539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-03 00:55:43.148543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-03 00:55:43.148550 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-03 00:55:43.148554 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.148558 | orchestrator | 2026-01-03 00:55:43.148562 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-03 00:55:43.148566 | orchestrator | Saturday 03 January 2026 00:54:28 +0000 (0:00:00.997) 0:05:21.539 ****** 2026-01-03 00:55:43.148570 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.148574 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.148577 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.148581 | orchestrator | 2026-01-03 00:55:43.148585 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-03 00:55:43.148589 | orchestrator | Saturday 03 January 2026 00:54:28 +0000 (0:00:00.483) 0:05:22.022 ****** 2026-01-03 00:55:43.148593 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.148597 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.148600 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.148604 | orchestrator | 2026-01-03 00:55:43.148608 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-03 00:55:43.148612 | orchestrator | Saturday 03 January 2026 00:54:30 +0000 (0:00:01.496) 0:05:23.519 ****** 2026-01-03 00:55:43.148663 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.148670 | orchestrator | 2026-01-03 00:55:43.148676 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-03 00:55:43.148682 | orchestrator | Saturday 03 
January 2026 00:54:32 +0000 (0:00:01.805) 0:05:25.325 ****** 2026-01-03 00:55:43.148688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-03 00:55:43.148707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': 
{'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-03 00:55:43.148714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-03 00:55:43.148721 | orchestrator | 2026-01-03 00:55:43.148731 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-03 00:55:43.148738 | orchestrator | Saturday 03 January 2026 00:54:34 +0000 (0:00:02.519) 0:05:27.845 ****** 2026-01-03 00:55:43.148743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-03 00:55:43.148755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-03 00:55:43.148762 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.148768 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.148778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-03 00:55:43.148785 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.148790 | orchestrator | 2026-01-03 00:55:43.148796 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-03 00:55:43.148802 | orchestrator | Saturday 03 January 2026 00:54:35 +0000 (0:00:00.434) 0:05:28.279 ****** 2026-01-03 00:55:43.148809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-03 00:55:43.148816 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.148822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-03 00:55:43.148829 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.148835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-03 00:55:43.148842 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.148848 | orchestrator | 2026-01-03 00:55:43.148854 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-03 00:55:43.148861 | orchestrator | Saturday 03 January 2026 00:54:36 +0000 
(0:00:01.027) 0:05:29.306 ****** 2026-01-03 00:55:43.148872 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.148886 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.148892 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.148898 | orchestrator | 2026-01-03 00:55:43.148904 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-03 00:55:43.148910 | orchestrator | Saturday 03 January 2026 00:54:36 +0000 (0:00:00.441) 0:05:29.748 ****** 2026-01-03 00:55:43.148916 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.148922 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.148928 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.148934 | orchestrator | 2026-01-03 00:55:43.148940 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-03 00:55:43.148946 | orchestrator | Saturday 03 January 2026 00:54:37 +0000 (0:00:01.365) 0:05:31.113 ****** 2026-01-03 00:55:43.148952 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:55:43.148958 | orchestrator | 2026-01-03 00:55:43.148964 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-03 00:55:43.148969 | orchestrator | Saturday 03 January 2026 00:54:39 +0000 (0:00:01.765) 0:05:32.879 ****** 2026-01-03 00:55:43.148975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.148983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.148993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.149005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.149019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.149026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-03 00:55:43.149033 | orchestrator | 2026-01-03 00:55:43.149039 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-03 00:55:43.149045 | orchestrator | Saturday 03 January 2026 00:54:46 +0000 (0:00:06.841) 0:05:39.721 ****** 2026-01-03 00:55:43.149056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.149068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.149080 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.149087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.149094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.149100 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.149110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.149117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-03 00:55:43.149128 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.149134 | orchestrator | 2026-01-03 00:55:43.149140 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-03 00:55:43.149147 | orchestrator | Saturday 03 January 2026 00:54:47 +0000 (0:00:00.665) 0:05:40.386 ****** 2026-01-03 00:55:43.149151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  
2026-01-03 00:55:43.149156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-03 00:55:43.149161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-03 00:55:43.149165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-03 00:55:43.149169 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.149173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-03 00:55:43.149176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-03 00:55:43.149180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-03 00:55:43.149184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-03 00:55:43.149188 | orchestrator | skipping: [testbed-node-1] 2026-01-03 
00:55:43.149192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-03 00:55:43.149196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-03 00:55:43.149200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-03 00:55:43.149204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-03 00:55:43.149211 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.149215 | orchestrator | 2026-01-03 00:55:43.149222 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-03 00:55:43.149226 | orchestrator | Saturday 03 January 2026 00:54:48 +0000 (0:00:01.724) 0:05:42.110 ****** 2026-01-03 00:55:43.149229 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.149233 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.149237 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.149241 | orchestrator | 2026-01-03 00:55:43.149245 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-03 00:55:43.149248 | orchestrator | Saturday 03 January 2026 00:54:50 +0000 (0:00:01.469) 0:05:43.579 ****** 2026-01-03 00:55:43.149252 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.149256 | orchestrator | 
changed: [testbed-node-1] 2026-01-03 00:55:43.149260 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.149263 | orchestrator | 2026-01-03 00:55:43.149267 | orchestrator | TASK [include_role : swift] **************************************************** 2026-01-03 00:55:43.149271 | orchestrator | Saturday 03 January 2026 00:54:52 +0000 (0:00:02.225) 0:05:45.805 ****** 2026-01-03 00:55:43.149275 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.149278 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.149282 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.149286 | orchestrator | 2026-01-03 00:55:43.149290 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-03 00:55:43.149294 | orchestrator | Saturday 03 January 2026 00:54:52 +0000 (0:00:00.326) 0:05:46.131 ****** 2026-01-03 00:55:43.149297 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.149301 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.149305 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.149309 | orchestrator | 2026-01-03 00:55:43.149313 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-03 00:55:43.149318 | orchestrator | Saturday 03 January 2026 00:54:53 +0000 (0:00:00.320) 0:05:46.452 ****** 2026-01-03 00:55:43.149322 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.149326 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.149330 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.149334 | orchestrator | 2026-01-03 00:55:43.149337 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-03 00:55:43.149341 | orchestrator | Saturday 03 January 2026 00:54:53 +0000 (0:00:00.716) 0:05:47.169 ****** 2026-01-03 00:55:43.149345 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.149349 | orchestrator | 
skipping: [testbed-node-1] 2026-01-03 00:55:43.149353 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.149356 | orchestrator | 2026-01-03 00:55:43.149360 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-03 00:55:43.149364 | orchestrator | Saturday 03 January 2026 00:54:54 +0000 (0:00:00.331) 0:05:47.501 ****** 2026-01-03 00:55:43.149368 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.149371 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.149375 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.149379 | orchestrator | 2026-01-03 00:55:43.149383 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-03 00:55:43.149386 | orchestrator | Saturday 03 January 2026 00:54:54 +0000 (0:00:00.310) 0:05:47.811 ****** 2026-01-03 00:55:43.149390 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.149394 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.149398 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.149402 | orchestrator | 2026-01-03 00:55:43.149405 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-03 00:55:43.149409 | orchestrator | Saturday 03 January 2026 00:54:55 +0000 (0:00:00.911) 0:05:48.723 ****** 2026-01-03 00:55:43.149413 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:55:43.149420 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:55:43.149424 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:55:43.149428 | orchestrator | 2026-01-03 00:55:43.149432 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-03 00:55:43.149436 | orchestrator | Saturday 03 January 2026 00:54:56 +0000 (0:00:00.669) 0:05:49.392 ****** 2026-01-03 00:55:43.149442 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:55:43.149448 | orchestrator | ok: [testbed-node-1] 
2026-01-03 00:55:43.149456 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:55:43.149464 | orchestrator | 2026-01-03 00:55:43.149470 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-03 00:55:43.149475 | orchestrator | Saturday 03 January 2026 00:54:56 +0000 (0:00:00.357) 0:05:49.750 ****** 2026-01-03 00:55:43.149481 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:55:43.149487 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:55:43.149494 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:55:43.149498 | orchestrator | 2026-01-03 00:55:43.149502 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-03 00:55:43.149506 | orchestrator | Saturday 03 January 2026 00:54:57 +0000 (0:00:00.845) 0:05:50.596 ****** 2026-01-03 00:55:43.149510 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:55:43.149514 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:55:43.149517 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:55:43.149521 | orchestrator | 2026-01-03 00:55:43.149525 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-01-03 00:55:43.149529 | orchestrator | Saturday 03 January 2026 00:54:58 +0000 (0:00:01.249) 0:05:51.846 ****** 2026-01-03 00:55:43.149533 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:55:43.149536 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:55:43.149540 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:55:43.149544 | orchestrator | 2026-01-03 00:55:43.149548 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-01-03 00:55:43.149552 | orchestrator | Saturday 03 January 2026 00:54:59 +0000 (0:00:00.862) 0:05:52.708 ****** 2026-01-03 00:55:43.149555 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.149559 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.149563 | orchestrator | changed: 
[testbed-node-1] 2026-01-03 00:55:43.149567 | orchestrator | 2026-01-03 00:55:43.149571 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-01-03 00:55:43.149575 | orchestrator | Saturday 03 January 2026 00:55:09 +0000 (0:00:10.000) 0:06:02.708 ****** 2026-01-03 00:55:43.149578 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:55:43.149582 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:55:43.149588 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:55:43.149594 | orchestrator | 2026-01-03 00:55:43.149604 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-01-03 00:55:43.149614 | orchestrator | Saturday 03 January 2026 00:55:10 +0000 (0:00:00.754) 0:06:03.463 ****** 2026-01-03 00:55:43.149636 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.149642 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.149647 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.149653 | orchestrator | 2026-01-03 00:55:43.149658 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-01-03 00:55:43.149664 | orchestrator | Saturday 03 January 2026 00:55:23 +0000 (0:00:13.074) 0:06:16.538 ****** 2026-01-03 00:55:43.149670 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:55:43.149676 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:55:43.149683 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:55:43.149689 | orchestrator | 2026-01-03 00:55:43.149695 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-01-03 00:55:43.149701 | orchestrator | Saturday 03 January 2026 00:55:24 +0000 (0:00:01.373) 0:06:17.911 ****** 2026-01-03 00:55:43.149707 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:55:43.149713 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:55:43.149720 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:55:43.149732 | 
orchestrator | 2026-01-03 00:55:43.149736 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-01-03 00:55:43.149740 | orchestrator | Saturday 03 January 2026 00:55:34 +0000 (0:00:09.532) 0:06:27.444 ****** 2026-01-03 00:55:43.149744 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.149748 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.149752 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.149881 | orchestrator | 2026-01-03 00:55:43.149886 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-01-03 00:55:43.149890 | orchestrator | Saturday 03 January 2026 00:55:34 +0000 (0:00:00.375) 0:06:27.820 ****** 2026-01-03 00:55:43.149894 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.149904 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.149908 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.149912 | orchestrator | 2026-01-03 00:55:43.149915 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-01-03 00:55:43.149919 | orchestrator | Saturday 03 January 2026 00:55:34 +0000 (0:00:00.379) 0:06:28.199 ****** 2026-01-03 00:55:43.149923 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.149927 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.149931 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.149936 | orchestrator | 2026-01-03 00:55:43.149943 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-01-03 00:55:43.149949 | orchestrator | Saturday 03 January 2026 00:55:35 +0000 (0:00:00.742) 0:06:28.942 ****** 2026-01-03 00:55:43.149955 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.149961 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.149966 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.149971 | 
orchestrator | 2026-01-03 00:55:43.149976 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-01-03 00:55:43.149981 | orchestrator | Saturday 03 January 2026 00:55:36 +0000 (0:00:00.375) 0:06:29.318 ****** 2026-01-03 00:55:43.149986 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.149995 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.150003 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.150008 | orchestrator | 2026-01-03 00:55:43.150056 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-01-03 00:55:43.150061 | orchestrator | Saturday 03 January 2026 00:55:36 +0000 (0:00:00.371) 0:06:29.689 ****** 2026-01-03 00:55:43.150065 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:55:43.150069 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:55:43.150073 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:55:43.150076 | orchestrator | 2026-01-03 00:55:43.150080 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-01-03 00:55:43.150084 | orchestrator | Saturday 03 January 2026 00:55:36 +0000 (0:00:00.392) 0:06:30.081 ****** 2026-01-03 00:55:43.150089 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:55:43.150092 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:55:43.150096 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:55:43.150100 | orchestrator | 2026-01-03 00:55:43.150104 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-01-03 00:55:43.150108 | orchestrator | Saturday 03 January 2026 00:55:38 +0000 (0:00:01.406) 0:06:31.488 ****** 2026-01-03 00:55:43.150113 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:55:43.150120 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:55:43.150127 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:55:43.150135 | orchestrator | 2026-01-03 
00:55:43.150141 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 00:55:43.150147 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-03 00:55:43.150154 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-03 00:55:43.150170 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-03 00:55:43.150176 | orchestrator |
2026-01-03 00:55:43.150182 | orchestrator |
2026-01-03 00:55:43.150188 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 00:55:43.150194 | orchestrator | Saturday 03 January 2026 00:55:39 +0000 (0:00:00.897) 0:06:32.386 ******
2026-01-03 00:55:43.150201 | orchestrator | ===============================================================================
2026-01-03 00:55:43.150207 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.07s
2026-01-03 00:55:43.150211 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.00s
2026-01-03 00:55:43.150215 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.53s
2026-01-03 00:55:43.150219 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.84s
2026-01-03 00:55:43.150227 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.08s
2026-01-03 00:55:43.150231 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.00s
2026-01-03 00:55:43.150235 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.97s
2026-01-03 00:55:43.150239 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.80s
2026-01-03 00:55:43.150247 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.79s
2026-01-03 00:55:43.150250 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.71s
2026-01-03 00:55:43.150254 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.61s
2026-01-03 00:55:43.150258 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.37s
2026-01-03 00:55:43.150262 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.28s
2026-01-03 00:55:43.150266 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.25s
2026-01-03 00:55:43.150269 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 4.07s
2026-01-03 00:55:43.150273 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.02s
2026-01-03 00:55:43.150277 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.01s
2026-01-03 00:55:43.150281 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.93s
2026-01-03 00:55:43.150285 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.79s
2026-01-03 00:55:43.150293 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.79s
2026-01-03 00:55:43.150298 | orchestrator | 2026-01-03 00:55:43 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED
2026-01-03 00:55:43.150301 | orchestrator | 2026-01-03 00:55:43 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED
2026-01-03 00:55:43.150305 | orchestrator | 2026-01-03 00:55:43 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:55:43.150305 | orchestrator | 2026-01-03 00:55:43 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:57:26.952378 | orchestrator | 2026-01-03 00:57:26 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED
2026-01-03 00:57:26.952456 | orchestrator | 2026-01-03 00:57:26 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED
2026-01-03 00:57:26.953500 | orchestrator | 2026-01-03 00:57:26 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED
2026-01-03 00:57:26.953537 | orchestrator | 2026-01-03 00:57:26 | INFO  | Wait 1 second(s) until the next check
2026-01-03 00:57:30.008494 | orchestrator | 2026-01-03 00:57:30 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED
2026-01-03 00:57:30.012418 | orchestrator | 2026-01-03 00:57:30 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in
state STARTED 2026-01-03 00:57:30.014922 | orchestrator | 2026-01-03 00:57:30 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state STARTED 2026-01-03 00:57:30.015081 | orchestrator | 2026-01-03 00:57:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:33.066328 | orchestrator | 2026-01-03 00:57:33 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED 2026-01-03 00:57:33.068102 | orchestrator | 2026-01-03 00:57:33 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:57:33.070500 | orchestrator | 2026-01-03 00:57:33 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:57:33.075683 | orchestrator | 2026-01-03 00:57:33 | INFO  | Task 6af73dbe-e794-48b2-985a-184f530e551f is in state SUCCESS 2026-01-03 00:57:33.077615 | orchestrator | 2026-01-03 00:57:33.077674 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-03 00:57:33.077684 | orchestrator | 2.16.14 2026-01-03 00:57:33.077692 | orchestrator | 2026-01-03 00:57:33.077698 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-01-03 00:57:33.077706 | orchestrator | 2026-01-03 00:57:33.077713 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-03 00:57:33.077719 | orchestrator | Saturday 03 January 2026 00:46:21 +0000 (0:00:00.789) 0:00:00.789 ****** 2026-01-03 00:57:33.077727 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.077751 | orchestrator | 2026-01-03 00:57:33.077757 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-03 00:57:33.077764 | orchestrator | Saturday 03 January 2026 00:46:22 +0000 (0:00:01.129) 0:00:01.919 ****** 2026-01-03 00:57:33.077771 | orchestrator | 
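The "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above are the OSISM task watcher polling a set of task IDs until each one reaches a terminal state (here, `6af73dbe…` finishes with SUCCESS). A minimal sketch of such a poll loop, assuming a hypothetical `get_state(task_id)` callback — the real client API behind these log lines is not shown in the log:

```python
import time

# Terminal states after which a task is no longer polled (assumed set).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll every task until all of them reach a terminal state.

    get_state(task_id) -> str is a hypothetical callback returning
    a state string such as "STARTED" or "SUCCESS".
    """
    pending = set(task_ids)
    final = {}
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                final[task_id] = state
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return final
```

Note that still-running tasks are re-checked every cycle, which matches the log: all three task IDs are printed on each pass until one of them leaves STARTED.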
ok: [testbed-node-3]
2026-01-03 00:57:33.077776 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.077832 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.077837 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.077873 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.077878 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.077884 | orchestrator |
2026-01-03 00:57:33.077943 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-03 00:57:33.077949 | orchestrator | Saturday 03 January 2026 00:46:24 +0000 (0:00:02.050) 0:00:03.969 ******
2026-01-03 00:57:33.077953 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.077982 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.077989 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.078097 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.078112 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.078116 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.078120 | orchestrator |
2026-01-03 00:57:33.078124 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-03 00:57:33.078128 | orchestrator | Saturday 03 January 2026 00:46:25 +0000 (0:00:00.888) 0:00:04.858 ******
2026-01-03 00:57:33.078132 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.078136 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.078140 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.078144 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.078148 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.078151 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.078155 | orchestrator |
2026-01-03 00:57:33.078159 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-03 00:57:33.078163 | orchestrator | Saturday 03 January 2026 00:46:26 +0000 (0:00:00.828) 0:00:05.686 ******
2026-01-03 00:57:33.078167 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.078171 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.078175 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.078178 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.078182 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.078186 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.078190 | orchestrator |
2026-01-03 00:57:33.078196 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-03 00:57:33.078205 | orchestrator | Saturday 03 January 2026 00:46:27 +0000 (0:00:00.691) 0:00:06.377 ******
2026-01-03 00:57:33.078213 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.078219 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.078226 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.078232 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.078402 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.078413 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.078417 | orchestrator |
2026-01-03 00:57:33.078429 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-03 00:57:33.078433 | orchestrator | Saturday 03 January 2026 00:46:27 +0000 (0:00:00.484) 0:00:06.862 ******
2026-01-03 00:57:33.078437 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.078441 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.078445 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.078448 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.078452 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.078456 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.078460 | orchestrator |
2026-01-03 00:57:33.078463 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-03 00:57:33.078468 | orchestrator | Saturday 03 January 2026 00:46:28 +0000 (0:00:01.102) 0:00:07.964 ******
2026-01-03 00:57:33.078513 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.078518 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.078522 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.078525 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.078529 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.078533 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.078537 | orchestrator |
2026-01-03 00:57:33.078541 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-03 00:57:33.078545 | orchestrator | Saturday 03 January 2026 00:46:29 +0000 (0:00:00.787) 0:00:08.751 ******
2026-01-03 00:57:33.078548 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.078552 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.078556 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.078560 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.078564 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.078567 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.078571 | orchestrator |
2026-01-03 00:57:33.078575 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-03 00:57:33.078579 | orchestrator | Saturday 03 January 2026 00:46:30 +0000 (0:00:00.890) 0:00:09.642 ******
2026-01-03 00:57:33.078583 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-03 00:57:33.078587 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-03 00:57:33.078591 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-03 00:57:33.078594 | orchestrator |
2026-01-03 00:57:33.078598 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-03 00:57:33.078602 |
orchestrator | Saturday 03 January 2026 00:46:31 +0000 (0:00:00.779) 0:00:10.421 ******
2026-01-03 00:57:33.078606 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.078610 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.078614 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.078630 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.078814 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.078822 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.078828 | orchestrator |
2026-01-03 00:57:33.078833 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-03 00:57:33.078840 | orchestrator | Saturday 03 January 2026 00:46:32 +0000 (0:00:01.281) 0:00:11.703 ******
2026-01-03 00:57:33.078846 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-03 00:57:33.078852 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-03 00:57:33.078858 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-03 00:57:33.078863 | orchestrator |
2026-01-03 00:57:33.078869 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-03 00:57:33.078874 | orchestrator | Saturday 03 January 2026 00:46:35 +0000 (0:00:03.065) 0:00:14.769 ******
2026-01-03 00:57:33.078880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-03 00:57:33.078886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-03 00:57:33.078892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-03 00:57:33.078898 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.078944 | orchestrator |
2026-01-03 00:57:33.078952 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-03 00:57:33.078958 | orchestrator | Saturday 03 January 2026 00:46:36 +0000 (0:00:00.832) 0:00:15.602 ******
2026-01-03 00:57:33.078965 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.078973 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.078987 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.079032 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079042 | orchestrator |
2026-01-03 00:57:33.079048 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-03 00:57:33.079054 | orchestrator | Saturday 03 January 2026 00:46:37 +0000 (0:00:00.756) 0:00:16.358 ******
2026-01-03 00:57:33.079069 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.079077 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.079083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.079144 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079150 | orchestrator |
2026-01-03 00:57:33.079154 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-03 00:57:33.079158 | orchestrator | Saturday 03 January 2026 00:46:37 +0000 (0:00:00.698) 0:00:17.056 ******
2026-01-03 00:57:33.079197 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-03 00:46:33.145855', 'end': '2026-01-03 00:46:33.465577', 'delta': '0:00:00.319722', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.079205 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-03 00:46:34.192520', 'end': '2026-01-03 00:46:34.450821', 'delta': '0:00:00.258301', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.079209 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-03 00:46:35.204503', 'end': '2026-01-03 00:46:35.507943', 'delta': '0:00:00.303440', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.079218 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079222 | orchestrator |
2026-01-03 00:57:33.079226 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-03 00:57:33.079230 | orchestrator | Saturday 03 January 2026 00:46:38 +0000 (0:00:00.206) 0:00:17.263 ******
2026-01-03 00:57:33.079233 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.079237 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.079241 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.079245 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.079249 | orchestrator | ok: [testbed-node-2]
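The "Find a running mon container" / "Set_fact running_mon - container" tasks above shell out to `docker ps -q --filter name=ceph-mon-<hostname>` on each monitor host and treat empty output as "no running mon" (here all three checks returned `'stdout': ''`). A minimal sketch of that check; `build_mon_ps_cmd`, `mon_is_running`, and `find_running_mon` are hypothetical helper names, not part of ceph-ansible:

```python
import subprocess

def build_mon_ps_cmd(hostname: str) -> list:
    # Same command shape the ceph-facts task runs per monitor host.
    return ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"]

def mon_is_running(ps_stdout: str) -> bool:
    # `docker ps -q` prints one container ID per line;
    # empty output means no matching container is running.
    return ps_stdout.strip() != ""

def find_running_mon(hostnames):
    # Return the first host with a running ceph-mon container, else None.
    for host in hostnames:
        out = subprocess.run(build_mon_ps_cmd(host),
                             capture_output=True, text=True).stdout
        if mon_is_running(out):
            return host
    return None
```

Because the filter matches on container name, a mon container started under a different naming scheme would not be found; the role relies on the `ceph-mon-<hostname>` convention.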
2026-01-03 00:57:33.079253 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.079256 | orchestrator |
2026-01-03 00:57:33.079260 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-03 00:57:33.079264 | orchestrator | Saturday 03 January 2026 00:46:39 +0000 (0:00:01.468) 0:00:18.732 ******
2026-01-03 00:57:33.079387 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-03 00:57:33.079398 | orchestrator |
2026-01-03 00:57:33.079405 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-03 00:57:33.079416 | orchestrator | Saturday 03 January 2026 00:46:40 +0000 (0:00:00.730) 0:00:19.462 ******
2026-01-03 00:57:33.079422 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079429 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.079435 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.079442 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.079449 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.079456 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.079462 | orchestrator |
2026-01-03 00:57:33.079469 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-03 00:57:33.079475 | orchestrator | Saturday 03 January 2026 00:46:41 +0000 (0:00:01.400) 0:00:20.862 ******
2026-01-03 00:57:33.079481 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079488 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.079494 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.079501 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.079507 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.079514 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.079520 | orchestrator |
2026-01-03 00:57:33.079527 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-03 00:57:33.079533 | orchestrator | Saturday 03 January 2026 00:46:43 +0000 (0:00:01.364) 0:00:22.227 ******
2026-01-03 00:57:33.079540 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079546 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.079553 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.079559 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.079565 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.079572 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.079578 | orchestrator |
2026-01-03 00:57:33.079584 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-03 00:57:33.079588 | orchestrator | Saturday 03 January 2026 00:46:43 +0000 (0:00:00.803) 0:00:23.031 ******
2026-01-03 00:57:33.079591 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079595 | orchestrator |
2026-01-03 00:57:33.079599 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-03 00:57:33.079603 | orchestrator | Saturday 03 January 2026 00:46:44 +0000 (0:00:00.154) 0:00:23.185 ******
2026-01-03 00:57:33.079649 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079654 | orchestrator |
2026-01-03 00:57:33.079658 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-03 00:57:33.079662 | orchestrator | Saturday 03 January 2026 00:46:44 +0000 (0:00:00.315) 0:00:23.501 ******
2026-01-03 00:57:33.079666 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079670 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.079673 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.079690 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.079694 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.079698 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.079702 | orchestrator |
2026-01-03 00:57:33.079706 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-03 00:57:33.079710 | orchestrator | Saturday 03 January 2026 00:46:45 +0000 (0:00:01.184) 0:00:24.686 ******
2026-01-03 00:57:33.079713 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079717 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.079721 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.079725 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.079728 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.079732 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.079826 | orchestrator |
2026-01-03 00:57:33.079831 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-03 00:57:33.079835 | orchestrator | Saturday 03 January 2026 00:46:46 +0000 (0:00:01.025) 0:00:25.711 ******
2026-01-03 00:57:33.079838 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079842 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.079846 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.079850 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.079854 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.079858 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.079861 | orchestrator |
2026-01-03 00:57:33.079865 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-03 00:57:33.079869 | orchestrator | Saturday 03 January 2026 00:46:47 +0000 (0:00:00.749) 0:00:26.461 ******
2026-01-03 00:57:33.079873 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079877 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.079881 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.079884 | orchestrator | skipping:
[testbed-node-0]
2026-01-03 00:57:33.079888 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.079892 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.079896 | orchestrator |
2026-01-03 00:57:33.079900 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-03 00:57:33.079903 | orchestrator | Saturday 03 January 2026 00:46:48 +0000 (0:00:00.958) 0:00:27.420 ******
2026-01-03 00:57:33.079907 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079911 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.079915 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.079919 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.079923 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.079926 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.079930 | orchestrator |
2026-01-03 00:57:33.079934 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-03 00:57:33.079938 | orchestrator | Saturday 03 January 2026 00:46:48 +0000 (0:00:00.671) 0:00:28.091 ******
2026-01-03 00:57:33.079942 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079946 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.079949 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.079953 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.079957 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.079961 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.079965 | orchestrator |
2026-01-03 00:57:33.079969 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-03 00:57:33.079976 | orchestrator | Saturday 03 January 2026 00:46:49 +0000 (0:00:01.011) 0:00:29.102 ******
2026-01-03 00:57:33.079980 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.079984 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.079987 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.079991 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.080014 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.080021 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.080027 | orchestrator |
2026-01-03 00:57:33.080031 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-03 00:57:33.080035 | orchestrator | Saturday 03 January 2026 00:46:50 +0000 (0:00:00.905) 0:00:30.007 ******
2026-01-03 00:57:33.080040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--147f94e4--6564--5421--8ac2--dc0697a6d722-osd--block--147f94e4--6564--5421--8ac2--dc0697a6d722', 'dm-uuid-LVM-VugVWX0xLFMxWH3ZLd8vaBvk7vZ2V2buA0HTw5gwHTF0naug4r1MkKve5QW6RixC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43909478--d18c--58e7--896e--8d0e3e550915-osd--block--43909478--d18c--58e7--896e--8d0e3e550915', 'dm-uuid-LVM-FVkTKVtS7jWHNPhNvjzCqcUnVH85HKsJj4Q1k4st1cUj1pSVsQIzOk6QwEbwnq3t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0',
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-03 00:57:33.080298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--124077fc--a709--5275--a3b4--8defea20aa20-osd--block--124077fc--a709--5275--a3b4--8defea20aa20', 'dm-uuid-LVM-U2TEeftFf5xZXlCo0y92bsW3URepmFeBXAPfH071IwwGzLi5nNcl8XZwFDHpfI9x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43153f84--c643--5017--9328--2bdcf330b780-osd--block--43153f84--c643--5017--9328--2bdcf330b780', 'dm-uuid-LVM-Z4AbDkyKPDUKoreVahmAvvrYi0XeSRDNay6MC5Whtl4BZWLLAoaAKysVy8GjWJLt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f97db499--9f50--5724--b4de--324784fab4ab-osd--block--f97db499--9f50--5724--b4de--324784fab4ab', 'dm-uuid-LVM-ndWslIiwf71ppOn2x6PIQ7ad3SX2au6xrHhVoHdwaukJYuY2LCSJQQc8XYE8M5hH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--147f94e4--6564--5421--8ac2--dc0697a6d722-osd--block--147f94e4--6564--5421--8ac2--dc0697a6d722'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kxIVcu-OJkD-rOZq-i7F3-Q4SP-FZlO-Vqy1gX', 'scsi-0QEMU_QEMU_HARDDISK_87925512-ad51-4d26-92cc-5f354ec37d18', 'scsi-SQEMU_QEMU_HARDDISK_87925512-ad51-4d26-92cc-5f354ec37d18'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-03 00:57:33.080431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-03 00:57:33.080450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none',
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--293f14c0--405b--5b3a--a5c8--f3b182003048-osd--block--293f14c0--405b--5b3a--a5c8--f3b182003048', 'dm-uuid-LVM-vHFp7zBiIjDYUVrHy51ObCTIcOMIAAApdPtk289GqEZ1R1LWrnrb1JanU7nRdTxY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--43909478--d18c--58e7--896e--8d0e3e550915-osd--block--43909478--d18c--58e7--896e--8d0e3e550915'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V2wRCC-19G3-RH1r-a0C6-3PRc-2ZMZ-n8RGBh', 'scsi-0QEMU_QEMU_HARDDISK_5af48d4e-a5d9-4c77-9873-39f930691ccf', 'scsi-SQEMU_QEMU_HARDDISK_5af48d4e-a5d9-4c77-9873-39f930691ccf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.080479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080499 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b3b8b78-42ce-4774-a4e8-f10424aa2bf1', 'scsi-SQEMU_QEMU_HARDDISK_0b3b8b78-42ce-4774-a4e8-f10424aa2bf1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.080582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8', 'scsi-SQEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part16'], 
'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.080666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.080683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080690 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080697 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.080704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.080734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-03 00:57:33.080745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red 
Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part1', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part14', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part15', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part16', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 
00:57:33.080823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--124077fc--a709--5275--a3b4--8defea20aa20-osd--block--124077fc--a709--5275--a3b4--8defea20aa20'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AYb26I-Z0TP-zaBQ-KZeQ-o9xr-ugwk-L4kNfq', 'scsi-0QEMU_QEMU_HARDDISK_5710b3c7-cfac-4b3c-9149-eeb74f32a79f', 'scsi-SQEMU_QEMU_HARDDISK_5710b3c7-cfac-4b3c-9149-eeb74f32a79f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.080829 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.080833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--43153f84--c643--5017--9328--2bdcf330b780-osd--block--43153f84--c643--5017--9328--2bdcf330b780'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-O1Nw5l-Gl6i-Mupu-1XpS-ZTPb-TeOH-HFVJaH', 'scsi-0QEMU_QEMU_HARDDISK_d4e0be62-f642-4458-ac3b-093009378a3c', 'scsi-SQEMU_QEMU_HARDDISK_d4e0be62-f642-4458-ac3b-093009378a3c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.080840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55b36885-901f-4155-8165-44f8903e4943', 'scsi-SQEMU_QEMU_HARDDISK_55b36885-901f-4155-8165-44f8903e4943'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.080848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.080866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-03 00:57:33.080876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part1', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part14', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part15', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part16', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.080881 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.080885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f97db499--9f50--5724--b4de--324784fab4ab-osd--block--f97db499--9f50--5724--b4de--324784fab4ab'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jop0PX-Eq53-g3X6-3Znd-ivQs-Y4IJ-HbxiEU', 'scsi-0QEMU_QEMU_HARDDISK_38f41ea3-c1f8-47b6-a316-62c713a7ab6d', 'scsi-SQEMU_QEMU_HARDDISK_38f41ea3-c1f8-47b6-a316-62c713a7ab6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.080949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.080966 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1', 'scsi-SQEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part1', 'scsi-SQEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part14', 'scsi-SQEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part15', 'scsi-SQEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part16', 'scsi-SQEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.080982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--293f14c0--405b--5b3a--a5c8--f3b182003048-osd--block--293f14c0--405b--5b3a--a5c8--f3b182003048'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K3GfH4-OdAN-3DHR-yzja-SS13-wYuw-r2xFkO', 'scsi-0QEMU_QEMU_HARDDISK_65a14f46-a6f7-4da8-aafc-46a47f969b79', 'scsi-SQEMU_QEMU_HARDDISK_65a14f46-a6f7-4da8-aafc-46a47f969b79'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.080989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.081023 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.081029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5f15048-4e23-4094-a7eb-216bc02a3879', 'scsi-SQEMU_QEMU_HARDDISK_e5f15048-4e23-4094-a7eb-216bc02a3879'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.081033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.081039 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.081043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-03 00:57:33.081047 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.081051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.081055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.081479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.081492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.081496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.081500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:57:33.081510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c', 'scsi-SQEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.081546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:57:33.081553 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.081557 | orchestrator | 2026-01-03 00:57:33.081561 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-03 00:57:33.081565 | orchestrator | Saturday 03 January 2026 00:46:52 +0000 (0:00:01.560) 0:00:31.567 ****** 2026-01-03 00:57:33.081570 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--147f94e4--6564--5421--8ac2--dc0697a6d722-osd--block--147f94e4--6564--5421--8ac2--dc0697a6d722', 'dm-uuid-LVM-VugVWX0xLFMxWH3ZLd8vaBvk7vZ2V2buA0HTw5gwHTF0naug4r1MkKve5QW6RixC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.081575 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43909478--d18c--58e7--896e--8d0e3e550915-osd--block--43909478--d18c--58e7--896e--8d0e3e550915', 'dm-uuid-LVM-FVkTKVtS7jWHNPhNvjzCqcUnVH85HKsJj4Q1k4st1cUj1pSVsQIzOk6QwEbwnq3t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.081584 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.081589 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.081593 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.081622 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
2026-01-03 00:57:33 | INFO  | Wait 1 second(s) until the next check
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.081633 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.081664 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.081670 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.081677 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.081705 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-03 00:57:33.081715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--147f94e4--6564--5421--8ac2--dc0697a6d722-osd--block--147f94e4--6564--5421--8ac2--dc0697a6d722'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kxIVcu-OJkD-rOZq-i7F3-Q4SP-FZlO-Vqy1gX', 'scsi-0QEMU_QEMU_HARDDISK_87925512-ad51-4d26-92cc-5f354ec37d18', 'scsi-SQEMU_QEMU_HARDDISK_87925512-ad51-4d26-92cc-5f354ec37d18'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.081863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--43909478--d18c--58e7--896e--8d0e3e550915-osd--block--43909478--d18c--58e7--896e--8d0e3e550915'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V2wRCC-19G3-RH1r-a0C6-3PRc-2ZMZ-n8RGBh', 'scsi-0QEMU_QEMU_HARDDISK_5af48d4e-a5d9-4c77-9873-39f930691ccf', 'scsi-SQEMU_QEMU_HARDDISK_5af48d4e-a5d9-4c77-9873-39f930691ccf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.081922 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b3b8b78-42ce-4774-a4e8-f10424aa2bf1', 'scsi-SQEMU_QEMU_HARDDISK_0b3b8b78-42ce-4774-a4e8-f10424aa2bf1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.082043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.082059 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f97db499--9f50--5724--b4de--324784fab4ab-osd--block--f97db499--9f50--5724--b4de--324784fab4ab', 'dm-uuid-LVM-ndWslIiwf71ppOn2x6PIQ7ad3SX2au6xrHhVoHdwaukJYuY2LCSJQQc8XYE8M5hH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.082066 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--293f14c0--405b--5b3a--a5c8--f3b182003048-osd--block--293f14c0--405b--5b3a--a5c8--f3b182003048', 'dm-uuid-LVM-vHFp7zBiIjDYUVrHy51ObCTIcOMIAAApdPtk289GqEZ1R1LWrnrb1JanU7nRdTxY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.082088 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.082097 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.082110 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--124077fc--a709--5275--a3b4--8defea20aa20-osd--block--124077fc--a709--5275--a3b4--8defea20aa20', 'dm-uuid-LVM-U2TEeftFf5xZXlCo0y92bsW3URepmFeBXAPfH071IwwGzLi5nNcl8XZwFDHpfI9x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-03 00:57:33.082166 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.082176 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43153f84--c643--5017--9328--2bdcf330b780-osd--block--43153f84--c643--5017--9328--2bdcf330b780', 'dm-uuid-LVM-Z4AbDkyKPDUKoreVahmAvvrYi0XeSRDNay6MC5Whtl4BZWLLAoaAKysVy8GjWJLt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.082183 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082206 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082214 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082226 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082271 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082279 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082286 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082292 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082302 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082313 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.082320 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082365 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part1', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part14', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part15', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part16', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082381 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f97db499--9f50--5724--b4de--324784fab4ab-osd--block--f97db499--9f50--5724--b4de--324784fab4ab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jop0PX-Eq53-g3X6-3Znd-ivQs-Y4IJ-HbxiEU', 'scsi-0QEMU_QEMU_HARDDISK_38f41ea3-c1f8-47b6-a316-62c713a7ab6d', 'scsi-SQEMU_QEMU_HARDDISK_38f41ea3-c1f8-47b6-a316-62c713a7ab6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082395 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082437 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--293f14c0--405b--5b3a--a5c8--f3b182003048-osd--block--293f14c0--405b--5b3a--a5c8--f3b182003048'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K3GfH4-OdAN-3DHR-yzja-SS13-wYuw-r2xFkO', 'scsi-0QEMU_QEMU_HARDDISK_65a14f46-a6f7-4da8-aafc-46a47f969b79', 'scsi-SQEMU_QEMU_HARDDISK_65a14f46-a6f7-4da8-aafc-46a47f969b79'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082445 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082452 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5f15048-4e23-4094-a7eb-216bc02a3879', 'scsi-SQEMU_QEMU_HARDDISK_e5f15048-4e23-4094-a7eb-216bc02a3879'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082473 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082519 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part1', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part14', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part15', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part16', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082530 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082587 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082604 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082611 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--124077fc--a709--5275--a3b4--8defea20aa20-osd--block--124077fc--a709--5275--a3b4--8defea20aa20'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AYb26I-Z0TP-zaBQ-KZeQ-o9xr-ugwk-L4kNfq', 'scsi-0QEMU_QEMU_HARDDISK_5710b3c7-cfac-4b3c-9149-eeb74f32a79f', 'scsi-SQEMU_QEMU_HARDDISK_5710b3c7-cfac-4b3c-9149-eeb74f32a79f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082658 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082666 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082670 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082676 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--43153f84--c643--5017--9328--2bdcf330b780-osd--block--43153f84--c643--5017--9328--2bdcf330b780'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-O1Nw5l-Gl6i-Mupu-1XpS-ZTPb-TeOH-HFVJaH', 'scsi-0QEMU_QEMU_HARDDISK_d4e0be62-f642-4458-ac3b-093009378a3c', 'scsi-SQEMU_QEMU_HARDDISK_d4e0be62-f642-4458-ac3b-093009378a3c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082684 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082688 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082716 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55b36885-901f-4155-8165-44f8903e4943', 'scsi-SQEMU_QEMU_HARDDISK_55b36885-901f-4155-8165-44f8903e4943'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082725 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8', 'scsi-SQEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c9b7723-665d-4a24-8fea-45de865b62a8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082732 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082759 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082765 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.082769 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082773 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082777 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082785 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082790 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082794 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082821 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082826 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082830 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.082837 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1', 'scsi-SQEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part1', 'scsi-SQEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part14', 'scsi-SQEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part15', 'scsi-SQEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part16', 'scsi-SQEMU_QEMU_HARDDISK_a696006f-841d-4b4f-91fe-873e20a4fba1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082845 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.082872 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082877 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.082881 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082885 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082893 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082898 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082903 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082906 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082934 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-03 00:57:33.082948 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle':
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.082958 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c', 'scsi-SQEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part16', 
'scsi-SQEMU_QEMU_HARDDISK_5c877c24-9ed7-4cc9-a1d6-11e6343c504c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.082969 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:57:33.083018 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.083026 | orchestrator | 2026-01-03 00:57:33.083033 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-03 00:57:33.083040 | orchestrator | Saturday 03 January 2026 00:46:53 +0000 (0:00:01.209) 0:00:32.777 ****** 2026-01-03 00:57:33.083045 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.083049 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.083053 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.083057 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.083060 | 
orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.083064 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.083068 | orchestrator |
2026-01-03 00:57:33.083082 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-03 00:57:33.083086 | orchestrator | Saturday 03 January 2026 00:46:54 +0000 (0:00:01.271) 0:00:34.048 ******
2026-01-03 00:57:33.083089 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.083093 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.083101 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.083104 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.083108 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.083112 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.083116 | orchestrator |
2026-01-03 00:57:33.083120 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-03 00:57:33.083124 | orchestrator | Saturday 03 January 2026 00:46:55 +0000 (0:00:00.573) 0:00:34.622 ******
2026-01-03 00:57:33.083127 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.083131 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.083135 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.083139 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.083143 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.083146 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.083150 | orchestrator |
2026-01-03 00:57:33.083154 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-03 00:57:33.083158 | orchestrator | Saturday 03 January 2026 00:46:56 +0000 (0:00:00.553) 0:00:35.296 ******
2026-01-03 00:57:33.083162 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.083170 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.083174 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.083177 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.083181 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.083185 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.083189 | orchestrator |
2026-01-03 00:57:33.083192 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-03 00:57:33.083196 | orchestrator | Saturday 03 January 2026 00:46:56 +0000 (0:00:00.771) 0:00:35.850 ******
2026-01-03 00:57:33.083200 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.083204 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.083208 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.083211 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.083215 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.083219 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.083223 | orchestrator |
2026-01-03 00:57:33.083227 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-03 00:57:33.083230 | orchestrator | Saturday 03 January 2026 00:46:57 +0000 (0:00:00.667) 0:00:36.621 ******
2026-01-03 00:57:33.083234 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.083238 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.083242 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.083245 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.083249 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.083253 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.083257 | orchestrator |
2026-01-03 00:57:33.083268 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-03 00:57:33.083272 | orchestrator | Saturday 03 January 2026 00:46:58 +0000 (0:00:00.667) 0:00:37.289 ******
2026-01-03 00:57:33.083276 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-03 00:57:33.083280 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-03 00:57:33.083283 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-03 00:57:33.083287 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-03 00:57:33.083291 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-03 00:57:33.083295 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-03 00:57:33.083299 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-03 00:57:33.083302 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-03 00:57:33.083306 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-03 00:57:33.083310 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-03 00:57:33.083314 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-03 00:57:33.083318 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-03 00:57:33.083324 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-03 00:57:33.083328 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-03 00:57:33.083332 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-03 00:57:33.083335 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-03 00:57:33.083339 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-03 00:57:33.083343 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-03 00:57:33.083347 | orchestrator |
2026-01-03 00:57:33.083351 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-03 00:57:33.083355 | orchestrator | Saturday 03 January 2026 00:47:00 +0000 (0:00:02.436) 0:00:39.726 ******
2026-01-03 00:57:33.083358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-03 00:57:33.083362 | orchestrator | skipping:
[testbed-node-3] => (item=testbed-node-1)
2026-01-03 00:57:33.083366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-03 00:57:33.083370 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.083374 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-03 00:57:33.083378 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-03 00:57:33.083382 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-03 00:57:33.083413 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.083418 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-03 00:57:33.083422 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-03 00:57:33.083426 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-03 00:57:33.083430 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.083433 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-03 00:57:33.083437 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-03 00:57:33.083441 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-03 00:57:33.083445 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-03 00:57:33.083448 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-03 00:57:33.083452 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-03 00:57:33.083456 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.083460 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.083464 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-03 00:57:33.083467 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-03 00:57:33.083471 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-03 00:57:33.083475 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.083479 | orchestrator |
2026-01-03 00:57:33.083483 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-03 00:57:33.083486 | orchestrator | Saturday 03 January 2026 00:47:01 +0000 (0:00:00.635) 0:00:40.361 ******
2026-01-03 00:57:33.083490 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.083494 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.083498 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.083502 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.083505 | orchestrator |
2026-01-03 00:57:33.083509 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-03 00:57:33.083514 | orchestrator | Saturday 03 January 2026 00:47:02 +0000 (0:00:01.536) 0:00:41.897 ******
2026-01-03 00:57:33.083518 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.083521 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.083525 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.083529 | orchestrator |
2026-01-03 00:57:33.083533 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-03 00:57:33.083540 | orchestrator | Saturday 03 January 2026 00:47:03 +0000 (0:00:00.451) 0:00:42.349 ******
2026-01-03 00:57:33.083544 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.083548 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.083552 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.083556 | orchestrator |
2026-01-03 00:57:33.083559 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-03 00:57:33.083563 | orchestrator | Saturday 03 January 2026 00:47:03 +0000 (0:00:00.626) 0:00:42.975 ******
2026-01-03 00:57:33.083567 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.083571 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.083574 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.083578 | orchestrator |
2026-01-03 00:57:33.083582 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-03 00:57:33.083588 | orchestrator | Saturday 03 January 2026 00:47:04 +0000 (0:00:00.961) 0:00:43.937 ******
2026-01-03 00:57:33.083592 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.083596 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.083599 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.083603 | orchestrator |
2026-01-03 00:57:33.083607 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-03 00:57:33.083611 | orchestrator | Saturday 03 January 2026 00:47:05 +0000 (0:00:00.789) 0:00:44.726 ******
2026-01-03 00:57:33.083615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:57:33.083618 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-03 00:57:33.083622 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-03 00:57:33.083626 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.083630 | orchestrator |
2026-01-03 00:57:33.083633 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-03 00:57:33.083637 | orchestrator | Saturday 03 January 2026 00:47:06 +0000 (0:00:00.700) 0:00:45.427 ******
2026-01-03 00:57:33.083641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:57:33.083645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-03 00:57:33.083649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-03 00:57:33.083652 | orchestrator |
skipping: [testbed-node-3]
2026-01-03 00:57:33.083656 | orchestrator |
2026-01-03 00:57:33.083660 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-03 00:57:33.083664 | orchestrator | Saturday 03 January 2026 00:47:06 +0000 (0:00:00.358) 0:00:45.785 ******
2026-01-03 00:57:33.083667 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:57:33.083671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-03 00:57:33.083675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-03 00:57:33.083679 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.083683 | orchestrator |
2026-01-03 00:57:33.083686 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-03 00:57:33.083690 | orchestrator | Saturday 03 January 2026 00:47:07 +0000 (0:00:00.501) 0:00:46.287 ******
2026-01-03 00:57:33.083694 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.083698 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.083702 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.083705 | orchestrator |
2026-01-03 00:57:33.083709 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-03 00:57:33.083713 | orchestrator | Saturday 03 January 2026 00:47:07 +0000 (0:00:00.459) 0:00:46.746 ******
2026-01-03 00:57:33.083727 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-03 00:57:33.083731 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-03 00:57:33.083735 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-03 00:57:33.083739 | orchestrator |
2026-01-03 00:57:33.083743 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-03 00:57:33.083747 | orchestrator | Saturday 03 January 2026 00:47:08 +0000 (0:00:01.173) 0:00:47.920 ******
2026-01-03 00:57:33.083753 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-03 00:57:33.083757 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-03 00:57:33.083761 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-03 00:57:33.083765 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:57:33.083769 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-03 00:57:33.083772 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-03 00:57:33.083776 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-03 00:57:33.083780 | orchestrator |
2026-01-03 00:57:33.083784 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-03 00:57:33.083788 | orchestrator | Saturday 03 January 2026 00:47:09 +0000 (0:00:00.737) 0:00:48.657 ******
2026-01-03 00:57:33.083791 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-03 00:57:33.083795 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-03 00:57:33.083799 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-03 00:57:33.083803 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:57:33.083806 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-03 00:57:33.083810 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-03 00:57:33.083814 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-03 00:57:33.083818 | orchestrator |
2026-01-03 00:57:33.083822 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-03 00:57:33.083825 | orchestrator | Saturday 03 January 2026 00:47:11 +0000 (0:00:02.041) 0:00:50.699 ******
2026-01-03 00:57:33.083830 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:57:33.083834 | orchestrator |
2026-01-03 00:57:33.083838 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-03 00:57:33.083841 | orchestrator | Saturday 03 January 2026 00:47:12 +0000 (0:00:01.191) 0:00:51.890 ******
2026-01-03 00:57:33.083845 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:57:33.083849 | orchestrator |
2026-01-03 00:57:33.083853 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-03 00:57:33.083857 | orchestrator | Saturday 03 January 2026 00:47:13 +0000 (0:00:01.236) 0:00:53.126 ******
2026-01-03 00:57:33.083860 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.083864 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.083868 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.083872 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.083876 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.083880 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.083883 | orchestrator |
2026-01-03 00:57:33.083887 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-03 00:57:33.083891 | orchestrator | Saturday 03 January 2026 00:47:15 +0000 (0:00:01.443) 0:00:54.570 ******
2026-01-03 00:57:33.083895 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.083898 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.083902 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.083906 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.083910 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.083913 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.083920 | orchestrator |
2026-01-03 00:57:33.083923 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-03 00:57:33.083927 | orchestrator | Saturday 03 January 2026 00:47:16 +0000 (0:00:00.992) 0:00:55.562 ******
2026-01-03 00:57:33.083931 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.083935 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.083939 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.083942 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.083946 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.083950 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.083954 | orchestrator |
2026-01-03 00:57:33.083972 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-03 00:57:33.083976 | orchestrator | Saturday 03 January 2026 00:47:17 +0000 (0:00:00.824) 0:00:56.387 ******
2026-01-03 00:57:33.083980 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.083984 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.083988 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.083991 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.084010 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.084016 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.084023 | orchestrator |
2026-01-03 00:57:33.084030 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-03 00:57:33.084036 | orchestrator | Saturday 03 January 2026 00:47:17 +0000 (0:00:00.729) 0:00:57.116 ******
2026-01-03 00:57:33.084042 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.084048 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.084064 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.084069 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.084074 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.084078 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.084082 | orchestrator |
2026-01-03 00:57:33.084087 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-03 00:57:33.084091 | orchestrator | Saturday 03 January 2026 00:47:19 +0000 (0:00:01.231) 0:00:58.348 ******
2026-01-03 00:57:33.084096 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.084101 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.084105 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.084110 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.084114 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.084119 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.084123 | orchestrator |
2026-01-03 00:57:33.084128 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-03 00:57:33.084132 | orchestrator | Saturday 03 January 2026 00:47:20 +0000 (0:00:00.796) 0:00:59.144 ******
2026-01-03 00:57:33.084137 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.084141 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.084146 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.084150 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.084154 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.084159 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.084164 | orchestrator |
2026-01-03 00:57:33.084168 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-03 00:57:33.084173 | orchestrator | Saturday 03 January 2026 00:47:20 +0000 (0:00:00.788) 0:00:59.932 ******
2026-01-03 00:57:33.084177 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.084182 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.084187 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.084191 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.084195 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.084200 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.084204 | orchestrator |
2026-01-03 00:57:33.084209 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-03 00:57:33.084213 | orchestrator | Saturday 03 January 2026 00:47:21 +0000 (0:00:01.011) 0:01:00.944 ******
2026-01-03 00:57:33.084221 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.084225 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.084230 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.084234 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.084239 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.084243 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.084248 | orchestrator |
2026-01-03 00:57:33.084252 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-03 00:57:33.084257 | orchestrator | Saturday 03 January 2026 00:47:23 +0000 (0:00:01.347) 0:01:02.292 ******
2026-01-03 00:57:33.084261 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.084266 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.084270 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.084275 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.084279 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.084283 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.084288 | orchestrator |
2026-01-03 00:57:33.084292 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-03 00:57:33.084297 | orchestrator | Saturday 03 January 2026 00:47:24 +0000 (0:00:00.928) 0:01:03.220 ******
2026-01-03 00:57:33.084301 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.084306 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.084310 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.084315 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.084319 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.084326 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.084330 | orchestrator |
2026-01-03 00:57:33.084335 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-03 00:57:33.084339 | orchestrator | Saturday 03 January 2026 00:47:25 +0000 (0:00:01.248) 0:01:04.468 ******
2026-01-03 00:57:33.084344 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.084348 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.084353 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.084357 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.084361 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.084366 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.084370 | orchestrator |
2026-01-03 00:57:33.084374 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-03 00:57:33.084379 | orchestrator | Saturday 03 January 2026 00:47:27 +0000 (0:00:01.691) 0:01:06.159 ******
2026-01-03 00:57:33.084384 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.084388 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.084392 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.084397 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.084402 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.084406 | orchestrator | skipping:
[testbed-node-2] 2026-01-03 00:57:33.084411 | orchestrator | 2026-01-03 00:57:33.084415 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-03 00:57:33.084420 | orchestrator | Saturday 03 January 2026 00:47:28 +0000 (0:00:01.194) 0:01:07.353 ****** 2026-01-03 00:57:33.084424 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.084429 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.084432 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.084436 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.084440 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.084444 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.084447 | orchestrator | 2026-01-03 00:57:33.084451 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-03 00:57:33.084455 | orchestrator | Saturday 03 January 2026 00:47:28 +0000 (0:00:00.670) 0:01:08.024 ****** 2026-01-03 00:57:33.084459 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.084463 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.084466 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.084470 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.084474 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.084480 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.084484 | orchestrator | 2026-01-03 00:57:33.084488 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-03 00:57:33.084492 | orchestrator | Saturday 03 January 2026 00:47:29 +0000 (0:00:01.008) 0:01:09.033 ****** 2026-01-03 00:57:33.084496 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.084509 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.084515 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.084521 | orchestrator | skipping: [testbed-node-0] 
2026-01-03 00:57:33.084531 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.084538 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.084544 | orchestrator | 2026-01-03 00:57:33.084551 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-03 00:57:33.084557 | orchestrator | Saturday 03 January 2026 00:47:30 +0000 (0:00:00.906) 0:01:09.939 ****** 2026-01-03 00:57:33.084562 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.084568 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.084574 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.084580 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.084585 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.084591 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.084597 | orchestrator | 2026-01-03 00:57:33.084604 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-03 00:57:33.084609 | orchestrator | Saturday 03 January 2026 00:47:31 +0000 (0:00:00.987) 0:01:10.927 ****** 2026-01-03 00:57:33.084615 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.084621 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.084628 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.084634 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.084641 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.084647 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.084653 | orchestrator | 2026-01-03 00:57:33.084660 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-03 00:57:33.084666 | orchestrator | Saturday 03 January 2026 00:47:32 +0000 (0:00:00.908) 0:01:11.835 ****** 2026-01-03 00:57:33.084673 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.084680 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.084687 | orchestrator | ok: [testbed-node-5] 
2026-01-03 00:57:33.084693 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.084700 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.084706 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.084713 | orchestrator | 2026-01-03 00:57:33.084720 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-03 00:57:33.084726 | orchestrator | Saturday 03 January 2026 00:47:34 +0000 (0:00:01.607) 0:01:13.443 ****** 2026-01-03 00:57:33.084733 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:57:33.084740 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.084746 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.084752 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.084759 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.084766 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.084773 | orchestrator | 2026-01-03 00:57:33.084779 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-03 00:57:33.084786 | orchestrator | Saturday 03 January 2026 00:47:35 +0000 (0:00:01.601) 0:01:15.045 ****** 2026-01-03 00:57:33.084793 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:57:33.084799 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.084806 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.084813 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.084820 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.084826 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.084833 | orchestrator | 2026-01-03 00:57:33.084839 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-03 00:57:33.084845 | orchestrator | Saturday 03 January 2026 00:47:38 +0000 (0:00:02.935) 0:01:17.980 ****** 2026-01-03 00:57:33.084861 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.084869 | orchestrator | 2026-01-03 00:57:33.084876 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-03 00:57:33.084882 | orchestrator | Saturday 03 January 2026 00:47:40 +0000 (0:00:01.343) 0:01:19.324 ****** 2026-01-03 00:57:33.084889 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.084896 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.084903 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.084910 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.084917 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.084923 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.084929 | orchestrator | 2026-01-03 00:57:33.084936 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-03 00:57:33.084942 | orchestrator | Saturday 03 January 2026 00:47:40 +0000 (0:00:00.633) 0:01:19.957 ****** 2026-01-03 00:57:33.084949 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.084956 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.084962 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.084969 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.084976 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.084983 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.084989 | orchestrator | 2026-01-03 00:57:33.085027 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-03 00:57:33.085034 | orchestrator | Saturday 03 January 2026 00:47:41 +0000 (0:00:00.982) 0:01:20.939 ****** 2026-01-03 00:57:33.085041 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-03 
00:57:33.085047 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-03 00:57:33.085053 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-03 00:57:33.085060 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-03 00:57:33.085067 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-03 00:57:33.085074 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-03 00:57:33.085080 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-03 00:57:33.085087 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-03 00:57:33.085093 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-03 00:57:33.085121 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-03 00:57:33.085128 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-03 00:57:33.085135 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-03 00:57:33.085140 | orchestrator | 2026-01-03 00:57:33.085146 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-03 00:57:33.085153 | orchestrator | Saturday 03 January 2026 00:47:43 +0000 (0:00:01.541) 0:01:22.481 ****** 2026-01-03 00:57:33.085159 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.085166 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:57:33.085173 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.085179 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.085185 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.085191 | 
orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.085198 | orchestrator | 2026-01-03 00:57:33.085205 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-03 00:57:33.085212 | orchestrator | Saturday 03 January 2026 00:47:44 +0000 (0:00:01.539) 0:01:24.020 ****** 2026-01-03 00:57:33.085227 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.085234 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.085241 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.085247 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.085253 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.085259 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.085265 | orchestrator | 2026-01-03 00:57:33.085272 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-03 00:57:33.085278 | orchestrator | Saturday 03 January 2026 00:47:45 +0000 (0:00:00.683) 0:01:24.704 ****** 2026-01-03 00:57:33.085285 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.085291 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.085297 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.085304 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.085310 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.085316 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.085323 | orchestrator | 2026-01-03 00:57:33.085329 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-03 00:57:33.085336 | orchestrator | Saturday 03 January 2026 00:47:46 +0000 (0:00:00.948) 0:01:25.652 ****** 2026-01-03 00:57:33.085342 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.085348 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.085354 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.085360 | 
orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.085366 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.085372 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.085378 | orchestrator | 2026-01-03 00:57:33.085384 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-03 00:57:33.085390 | orchestrator | Saturday 03 January 2026 00:47:47 +0000 (0:00:00.590) 0:01:26.243 ****** 2026-01-03 00:57:33.085396 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.085402 | orchestrator | 2026-01-03 00:57:33.085408 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-03 00:57:33.085414 | orchestrator | Saturday 03 January 2026 00:47:48 +0000 (0:00:01.327) 0:01:27.571 ****** 2026-01-03 00:57:33.085424 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.085430 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.085436 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.085442 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.085448 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.085454 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.085460 | orchestrator | 2026-01-03 00:57:33.085476 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-03 00:57:33.085488 | orchestrator | Saturday 03 January 2026 00:48:53 +0000 (0:01:04.981) 0:02:32.552 ****** 2026-01-03 00:57:33.085495 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-03 00:57:33.085501 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-03 00:57:33.085508 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-01-03 00:57:33.085515 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.085522 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-03 00:57:33.085528 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-03 00:57:33.085535 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-03 00:57:33.085542 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.085548 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-03 00:57:33.085555 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-03 00:57:33.085567 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-03 00:57:33.085574 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.085580 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-03 00:57:33.085587 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-03 00:57:33.085593 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-03 00:57:33.085599 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.085606 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-03 00:57:33.085613 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-03 00:57:33.085619 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-03 00:57:33.085648 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.085655 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-03 00:57:33.085662 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-01-03 00:57:33.085668 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-03 00:57:33.085675 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.085681 | orchestrator | 2026-01-03 00:57:33.085687 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-03 00:57:33.085694 | orchestrator | Saturday 03 January 2026 00:48:54 +0000 (0:00:00.633) 0:02:33.185 ****** 2026-01-03 00:57:33.085700 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.085706 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.085713 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.085719 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.085725 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.085731 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.085738 | orchestrator | 2026-01-03 00:57:33.085744 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-03 00:57:33.085750 | orchestrator | Saturday 03 January 2026 00:48:54 +0000 (0:00:00.781) 0:02:33.967 ****** 2026-01-03 00:57:33.085757 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.085763 | orchestrator | 2026-01-03 00:57:33.085770 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-03 00:57:33.085776 | orchestrator | Saturday 03 January 2026 00:48:54 +0000 (0:00:00.151) 0:02:34.118 ****** 2026-01-03 00:57:33.085783 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.085789 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.085795 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.085802 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.085807 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.085814 | orchestrator | skipping: 
[testbed-node-2] 2026-01-03 00:57:33.085820 | orchestrator | 2026-01-03 00:57:33.085826 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-03 00:57:33.085833 | orchestrator | Saturday 03 January 2026 00:48:55 +0000 (0:00:00.591) 0:02:34.710 ****** 2026-01-03 00:57:33.085839 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.085846 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.085852 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.085859 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.085865 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.085871 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.085878 | orchestrator | 2026-01-03 00:57:33.085884 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-03 00:57:33.085890 | orchestrator | Saturday 03 January 2026 00:48:56 +0000 (0:00:00.845) 0:02:35.555 ****** 2026-01-03 00:57:33.085897 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.085903 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.085910 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.085920 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.085927 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.085933 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.085940 | orchestrator | 2026-01-03 00:57:33.085946 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-03 00:57:33.085952 | orchestrator | Saturday 03 January 2026 00:48:57 +0000 (0:00:00.590) 0:02:36.145 ****** 2026-01-03 00:57:33.085959 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.085965 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.085971 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.085982 | orchestrator | ok: [testbed-node-0] 2026-01-03 
00:57:33.085989 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.086006 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.086047 | orchestrator | 2026-01-03 00:57:33.086056 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-03 00:57:33.086063 | orchestrator | Saturday 03 January 2026 00:48:59 +0000 (0:00:02.215) 0:02:38.361 ****** 2026-01-03 00:57:33.086070 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.086076 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.086083 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.086090 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.086096 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.086103 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.086109 | orchestrator | 2026-01-03 00:57:33.086116 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-03 00:57:33.086123 | orchestrator | Saturday 03 January 2026 00:48:59 +0000 (0:00:00.752) 0:02:39.113 ****** 2026-01-03 00:57:33.086130 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.086137 | orchestrator | 2026-01-03 00:57:33.086143 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-03 00:57:33.086149 | orchestrator | Saturday 03 January 2026 00:49:01 +0000 (0:00:01.488) 0:02:40.602 ****** 2026-01-03 00:57:33.086155 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.086161 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.086167 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.086174 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.086180 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.086187 | orchestrator | skipping: 
[testbed-node-2] 2026-01-03 00:57:33.086193 | orchestrator | 2026-01-03 00:57:33.086200 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-03 00:57:33.086207 | orchestrator | Saturday 03 January 2026 00:49:02 +0000 (0:00:00.820) 0:02:41.422 ****** 2026-01-03 00:57:33.086214 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.086220 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.086227 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.086233 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.086239 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.086246 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.086252 | orchestrator | 2026-01-03 00:57:33.086259 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-03 00:57:33.086265 | orchestrator | Saturday 03 January 2026 00:49:02 +0000 (0:00:00.594) 0:02:42.017 ****** 2026-01-03 00:57:33.086291 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.086298 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.086305 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.086310 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.086316 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.086322 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.086329 | orchestrator | 2026-01-03 00:57:33.086335 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-03 00:57:33.086342 | orchestrator | Saturday 03 January 2026 00:49:03 +0000 (0:00:00.783) 0:02:42.800 ****** 2026-01-03 00:57:33.086354 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.086360 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.086367 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.086373 | orchestrator | skipping: 
[testbed-node-0] 2026-01-03 00:57:33.086379 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.086386 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.086392 | orchestrator | 2026-01-03 00:57:33.086399 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-03 00:57:33.086405 | orchestrator | Saturday 03 January 2026 00:49:04 +0000 (0:00:00.647) 0:02:43.447 ****** 2026-01-03 00:57:33.086412 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.086418 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.086425 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.086431 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.086437 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.086444 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.086450 | orchestrator | 2026-01-03 00:57:33.086456 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-03 00:57:33.086463 | orchestrator | Saturday 03 January 2026 00:49:05 +0000 (0:00:00.813) 0:02:44.260 ****** 2026-01-03 00:57:33.086469 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.086476 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.086482 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.086489 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.086495 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.086502 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.086508 | orchestrator | 2026-01-03 00:57:33.086515 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-03 00:57:33.086522 | orchestrator | Saturday 03 January 2026 00:49:05 +0000 (0:00:00.559) 0:02:44.820 ****** 2026-01-03 00:57:33.086529 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.086535 | orchestrator | skipping: 
[testbed-node-4] 2026-01-03 00:57:33.086542 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.086548 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.086554 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.086561 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.086567 | orchestrator | 2026-01-03 00:57:33.086573 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-03 00:57:33.086580 | orchestrator | Saturday 03 January 2026 00:49:06 +0000 (0:00:00.694) 0:02:45.514 ****** 2026-01-03 00:57:33.086586 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.086592 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.086599 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.086605 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.086611 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.086617 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.086623 | orchestrator | 2026-01-03 00:57:33.086630 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-03 00:57:33.086636 | orchestrator | Saturday 03 January 2026 00:49:07 +0000 (0:00:00.672) 0:02:46.186 ****** 2026-01-03 00:57:33.086642 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.086652 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.086659 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.086665 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.086671 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.086678 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.086684 | orchestrator | 2026-01-03 00:57:33.086690 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-03 00:57:33.086696 | orchestrator | Saturday 03 January 2026 00:49:08 +0000 (0:00:01.180) 0:02:47.366 ****** 2026-01-03 
00:57:33.086703 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.086714 | orchestrator | 2026-01-03 00:57:33.086720 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-03 00:57:33.086726 | orchestrator | Saturday 03 January 2026 00:49:09 +0000 (0:00:01.242) 0:02:48.609 ****** 2026-01-03 00:57:33.086733 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-01-03 00:57:33.086739 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-01-03 00:57:33.086746 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-01-03 00:57:33.086752 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-01-03 00:57:33.086758 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-01-03 00:57:33.086764 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-01-03 00:57:33.086771 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-01-03 00:57:33.086777 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-01-03 00:57:33.086784 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-01-03 00:57:33.086790 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-01-03 00:57:33.086796 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-01-03 00:57:33.086802 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-01-03 00:57:33.086808 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-01-03 00:57:33.086815 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-01-03 00:57:33.086821 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-01-03 00:57:33.086827 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 
2026-01-03 00:57:33.086833 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-03 00:57:33.086854 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-03 00:57:33.086861 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-03 00:57:33.086867 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-03 00:57:33.086873 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-03 00:57:33.086880 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-03 00:57:33.086886 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-03 00:57:33.086892 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-01-03 00:57:33.086898 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-03 00:57:33.086904 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-03 00:57:33.086911 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-03 00:57:33.086917 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-03 00:57:33.086923 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-03 00:57:33.086929 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-01-03 00:57:33.086936 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-03 00:57:33.086942 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-03 00:57:33.086948 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-03 00:57:33.086954 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-01-03 00:57:33.086960 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-03 00:57:33.086966 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-03 00:57:33.086972 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-03 00:57:33.086978 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-03 00:57:33.086985 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-03 00:57:33.086991 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-01-03 00:57:33.087031 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-03 00:57:33.087037 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-03 00:57:33.087050 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-03 00:57:33.087057 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-03 00:57:33.087063 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-01-03 00:57:33.087069 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-03 00:57:33.087075 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-03 00:57:33.087082 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-03 00:57:33.087088 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-03 00:57:33.087094 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-03 00:57:33.087100 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-03 00:57:33.087107 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-03 00:57:33.087117 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-03 00:57:33.087123 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-03 00:57:33.087129 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-03 00:57:33.087135 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-03 00:57:33.087142 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-03 00:57:33.087148 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-03 00:57:33.087154 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-03 00:57:33.087160 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-03 00:57:33.087167 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-03 00:57:33.087173 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-03 00:57:33.087179 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-03 00:57:33.087185 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-03 00:57:33.087192 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-03 00:57:33.087198 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-03 00:57:33.087204 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-03 00:57:33.087210 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-03 00:57:33.087216 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-03 00:57:33.087223 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-03 00:57:33.087229 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-03 00:57:33.087235 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-03 00:57:33.087241 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-03 00:57:33.087247 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-03 00:57:33.087254 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-01-03 00:57:33.087278 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-03 00:57:33.087286 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-03 00:57:33.087292 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-03 00:57:33.087299 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-03 00:57:33.087305 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-01-03 00:57:33.087312 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-03 00:57:33.087319 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-03 00:57:33.087329 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-03 00:57:33.087336 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-03 00:57:33.087343 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-03 00:57:33.087349 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-01-03 00:57:33.087356 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-03 00:57:33.087362 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-01-03 00:57:33.087369 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-01-03 00:57:33.087376 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-01-03 00:57:33.087382 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-01-03 00:57:33.087389 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-01-03 00:57:33.087395 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-01-03 00:57:33.087402 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-01-03 00:57:33.087408 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-01-03 00:57:33.087415 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-01-03 00:57:33.087421 | orchestrator |
2026-01-03 00:57:33.087428 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-03 00:57:33.087434 | orchestrator | Saturday 03 January 2026 00:49:15 +0000 (0:00:06.474) 0:02:55.083 ******
2026-01-03 00:57:33.087441 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.087448 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.087454 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.087461 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.087468 | orchestrator |
2026-01-03 00:57:33.087475 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-03 00:57:33.087482 | orchestrator | Saturday 03 January 2026 00:49:16 +0000 (0:00:00.868) 0:02:55.951 ******
2026-01-03 00:57:33.087488 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-03 00:57:33.087495 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-03 00:57:33.087505 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-03 00:57:33.087511 | orchestrator |
2026-01-03 00:57:33.087518 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-03 00:57:33.087525 | orchestrator | Saturday 03 January 2026 00:49:17 +0000 (0:00:01.142) 0:02:57.093 ******
2026-01-03 00:57:33.087531 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-03 00:57:33.087538 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-03 00:57:33.087545 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-03 00:57:33.087551 | orchestrator |
2026-01-03 00:57:33.087558 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-03 00:57:33.087565 | orchestrator | Saturday 03 January 2026 00:49:19 +0000 (0:00:01.350) 0:02:58.444 ******
2026-01-03 00:57:33.087571 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.087578 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.087585 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.087591 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.087598 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.087608 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.087615 | orchestrator |
2026-01-03 00:57:33.087622 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-03 00:57:33.087628 | orchestrator | Saturday 03 January 2026 00:49:19 +0000 (0:00:00.671) 0:02:59.115 ******
2026-01-03 00:57:33.087635 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.087641 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.087648 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.087655 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.087661 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.087668 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.087674 | orchestrator |
2026-01-03 00:57:33.087681 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-03 00:57:33.087687 | orchestrator | Saturday 03 January 2026 00:49:21 +0000 (0:00:01.485) 0:03:00.600 ******
2026-01-03 00:57:33.087694 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.087700 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.087707 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.087714 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.087720 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.087741 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.087748 | orchestrator |
2026-01-03 00:57:33.087754 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-03 00:57:33.087761 | orchestrator | Saturday 03 January 2026 00:49:22 +0000 (0:00:00.854) 0:03:01.454 ******
2026-01-03 00:57:33.087768 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.087774 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.087781 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.087787 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.087794 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.087800 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.087807 | orchestrator |
2026-01-03 00:57:33.087813 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-03 00:57:33.087820 | orchestrator | Saturday 03 January 2026 00:49:23 +0000 (0:00:00.939) 0:03:02.394 ******
2026-01-03 00:57:33.087827 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.087833 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.087840 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.087846 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.087853 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.087859 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.087866 | orchestrator |
2026-01-03 00:57:33.087873 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-03 00:57:33.087879 | orchestrator | Saturday 03 January 2026 00:49:24 +0000 (0:00:00.780) 0:03:03.175 ******
2026-01-03 00:57:33.087886 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.087893 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.087899 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.087906 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.087913 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.087919 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.087926 | orchestrator |
2026-01-03 00:57:33.087932 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-03 00:57:33.087939 | orchestrator | Saturday 03 January 2026 00:49:25 +0000 (0:00:00.966) 0:03:04.142 ******
2026-01-03 00:57:33.087946 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.087952 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.087959 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.087965 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.087972 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.087978 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.087985 | orchestrator |
2026-01-03 00:57:33.087991 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-03 00:57:33.088012 | orchestrator | Saturday 03 January 2026 00:49:25 +0000 (0:00:00.716) 0:03:04.858 ******
2026-01-03 00:57:33.088018 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.088024 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.088029 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.088035 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.088042 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.088048 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.088055 | orchestrator |
2026-01-03 00:57:33.088062 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-03 00:57:33.088068 | orchestrator | Saturday 03 January 2026 00:49:26 +0000 (0:00:00.955) 0:03:05.814 ******
2026-01-03 00:57:33.088075 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.088081 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.088088 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.088097 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.088104 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.088110 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.088117 | orchestrator |
2026-01-03 00:57:33.088124 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-03 00:57:33.088130 | orchestrator | Saturday 03 January 2026 00:49:29 +0000 (0:00:02.859) 0:03:08.673 ******
2026-01-03 00:57:33.088137 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.088143 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.088150 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.088156 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.088163 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.088169 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.088175 | orchestrator |
2026-01-03 00:57:33.088181 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-03 00:57:33.088186 | orchestrator | Saturday 03 January 2026 00:49:30 +0000 (0:00:01.104) 0:03:09.778 ******
2026-01-03 00:57:33.088192 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.088198 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.088204 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.088210 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.088216 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.088221 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.088227 | orchestrator |
2026-01-03 00:57:33.088233 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-03 00:57:33.088239 | orchestrator | Saturday 03 January 2026 00:49:31 +0000 (0:00:00.772) 0:03:10.550 ******
2026-01-03 00:57:33.088246 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.088253 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.088259 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.088266 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.088272 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.088279 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.088286 | orchestrator |
2026-01-03 00:57:33.088292 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-03 00:57:33.088299 | orchestrator | Saturday 03 January 2026 00:49:32 +0000 (0:00:01.133) 0:03:11.684 ******
2026-01-03 00:57:33.088305 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-03 00:57:33.088312 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-03 00:57:33.088336 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-03 00:57:33.088343 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.088350 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.088357 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.088371 | orchestrator |
2026-01-03 00:57:33.088378 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-03 00:57:33.088385 | orchestrator | Saturday 03 January 2026 00:49:33 +0000 (0:00:00.655) 0:03:12.339 ******
2026-01-03 00:57:33.088393 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-01-03 00:57:33.088401 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-01-03 00:57:33.088409 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-01-03 00:57:33.088416 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-01-03 00:57:33.088423 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.088430 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-01-03 00:57:33.088437 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-01-03 00:57:33.088443 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.088450 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.088460 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.088466 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.088473 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.088480 | orchestrator |
2026-01-03 00:57:33.088486 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-03 00:57:33.088493 | orchestrator | Saturday 03 January 2026 00:49:34 +0000 (0:00:00.955) 0:03:13.294 ******
2026-01-03 00:57:33.088499 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.088506 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.088512 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.088519 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.088525 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.088532 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.088538 | orchestrator |
2026-01-03 00:57:33.088545 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-03 00:57:33.088552 | orchestrator | Saturday 03 January 2026 00:49:34 +0000 (0:00:00.447) 0:03:13.742 ******
2026-01-03 00:57:33.088558 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.088565 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.088571 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.088578 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.088585 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.088591 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.088598 | orchestrator |
2026-01-03 00:57:33.088608 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-03 00:57:33.088615 | orchestrator | Saturday 03 January 2026 00:49:35 +0000 (0:00:00.732) 0:03:14.474 ******
2026-01-03 00:57:33.088621 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.088628 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.088634 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.088641 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.088647 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.088654 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.088660 | orchestrator |
2026-01-03 00:57:33.088667 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-03 00:57:33.088674 | orchestrator | Saturday 03 January 2026 00:49:35 +0000 (0:00:00.552) 0:03:15.027 ******
2026-01-03 00:57:33.088680 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.088687 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.088693 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.088700 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.088706 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.088713 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.088719 | orchestrator |
2026-01-03 00:57:33.088740 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-03 00:57:33.088747 | orchestrator | Saturday 03 January 2026 00:49:36 +0000 (0:00:00.798) 0:03:15.825 ******
2026-01-03 00:57:33.088753 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.088760 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.088767 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.088773 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.088780 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.088786 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.088793 | orchestrator |
2026-01-03 00:57:33.088799 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-03 00:57:33.088806 | orchestrator | Saturday 03 January 2026 00:49:37 +0000 (0:00:00.718) 0:03:16.543 ******
2026-01-03 00:57:33.088812 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.088819 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.088825 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.088832 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.088839 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.088845 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.088852 | orchestrator |
2026-01-03 00:57:33.088858 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-03 00:57:33.088865 | orchestrator | Saturday 03 January 2026 00:49:38 +0000 (0:00:00.829) 0:03:17.373 ******
2026-01-03 00:57:33.088871 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:57:33.088878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-03 00:57:33.088884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-03 00:57:33.088891 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.088897 | orchestrator |
2026-01-03 00:57:33.088904 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-03 00:57:33.088911 | orchestrator | Saturday 03 January 2026 00:49:38 +0000 (0:00:00.445) 0:03:17.819 ******
2026-01-03 00:57:33.088917 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:57:33.088924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-03 00:57:33.088930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-03 00:57:33.088991 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.089090 | orchestrator |
2026-01-03 00:57:33.089098 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-03 00:57:33.089105 | orchestrator | Saturday 03 January 2026 00:49:39 +0000 (0:00:00.361) 0:03:18.180 ******
2026-01-03 00:57:33.089111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:57:33.089122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-03 00:57:33.089129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-03 00:57:33.089136 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.089142 | orchestrator |
2026-01-03 00:57:33.089149 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-03 00:57:33.089155 | orchestrator | Saturday 03 January 2026 00:49:39 +0000 (0:00:00.350) 0:03:18.531 ******
2026-01-03 00:57:33.089162 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.089169 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.089175 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.089182 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.089188 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.089195 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.089201 | orchestrator |
2026-01-03 00:57:33.089211 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-03 00:57:33.089217 | orchestrator | Saturday 03 January 2026 00:49:39 +0000 (0:00:00.495) 0:03:19.026 ******
2026-01-03 00:57:33.089224 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-03 00:57:33.089231 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-03 00:57:33.089237 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-01-03 00:57:33.089244 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.089251 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-01-03 00:57:33.089257 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.089264 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-03 00:57:33.089270 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-01-03 00:57:33.089277 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.089284 | orchestrator |
2026-01-03 00:57:33.089290 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-03 00:57:33.089296 | orchestrator | Saturday 03 January 2026 00:49:42 +0000 (0:00:02.428) 0:03:21.455 ******
2026-01-03 00:57:33.089302 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.089308 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.089314 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:57:33.089320 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.089327 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:57:33.089333 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:57:33.089340 | orchestrator |
2026-01-03 00:57:33.089347 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-03 00:57:33.089353 | orchestrator | Saturday 03 January 2026 00:49:45 +0000 (0:00:03.011) 0:03:24.466 ******
2026-01-03 00:57:33.089360 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.089367 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.089373 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.089379 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:57:33.089386 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:57:33.089392 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:57:33.089399 | orchestrator |
2026-01-03 00:57:33.089406 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-03 00:57:33.089412 | orchestrator | Saturday 03 January 2026 00:49:46 +0000 (0:00:01.042) 0:03:25.509 ******
2026-01-03 00:57:33.089419 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.089426 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.089432 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.089439 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:57:33.089446 | orchestrator |
2026-01-03 00:57:33.089472 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-03 00:57:33.089479 | orchestrator | Saturday 03 January 2026 00:49:47 +0000 (0:00:00.904) 0:03:26.413 ******
2026-01-03 00:57:33.089485 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.089492 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.089509 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.089516 | orchestrator |
2026-01-03 00:57:33.089522 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-03 00:57:33.089529 | orchestrator | Saturday 03 January 2026 00:49:47 +0000 (0:00:00.279) 0:03:26.693 ******
2026-01-03 00:57:33.089536 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:57:33.089542 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:57:33.089549 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:57:33.089555 | orchestrator |
2026-01-03 00:57:33.089562 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-03 00:57:33.089569 | orchestrator | Saturday 03 January 2026 00:49:48 +0000 (0:00:01.344) 0:03:28.038 ******
2026-01-03 00:57:33.089575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-03 00:57:33.089582 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-03 00:57:33.089588 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-03 00:57:33.089595 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.089601 | orchestrator |
2026-01-03 00:57:33.089608 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-03 00:57:33.089615 | orchestrator | Saturday 03 January 2026 00:49:49 +0000 (0:00:00.619) 0:03:28.657 ******
2026-01-03 00:57:33.089621 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.089628 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.089634 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.089641 | orchestrator |
2026-01-03 00:57:33.089648 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-03 00:57:33.089654 | orchestrator | Saturday 03 January 2026 00:49:49 +0000 (0:00:00.330) 0:03:28.987 ******
2026-01-03 00:57:33.089661 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.089668 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.089674 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.089681 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.089688 | orchestrator |
2026-01-03 00:57:33.089694 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-03 00:57:33.089701 | orchestrator | Saturday 03 January 2026 00:49:50 +0000 (0:00:00.901) 0:03:29.888 ******
2026-01-03 00:57:33.089707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:57:33.089714 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-03 00:57:33.089721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-03 00:57:33.089727 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.089734 | orchestrator |
2026-01-03 00:57:33.089741 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-03 00:57:33.089748 | orchestrator | Saturday 03 January 2026 00:49:51 +0000 (0:00:00.501) 0:03:30.390 ******
2026-01-03 00:57:33.089754 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.089761 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.089767 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.089774 | orchestrator |
2026-01-03 00:57:33.089781 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-03 00:57:33.089787 | orchestrator | Saturday 03 January 2026 00:49:51 +0000 (0:00:00.398) 0:03:30.788 ******
2026-01-03 00:57:33.089797 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.089804 | orchestrator |
2026-01-03 00:57:33.089810 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-03 00:57:33.089817 | orchestrator | Saturday 03 January 2026 00:49:51 +0000 (0:00:00.263) 0:03:31.052 ******
2026-01-03 00:57:33.089824 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.089830 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.089837 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.089843 | orchestrator |
2026-01-03 00:57:33.089850 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-03 00:57:33.089861 | orchestrator | Saturday 03 January 2026 00:49:52 +0000 (0:00:00.423) 0:03:31.475 ******
2026-01-03 00:57:33.089867 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.089874 | orchestrator |
2026-01-03 00:57:33.089880 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-03 00:57:33.089887 | orchestrator | Saturday 03 January 2026 00:49:52 +0000 (0:00:00.287) 0:03:31.763 ******
2026-01-03 00:57:33.089893 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.089900 | orchestrator |
2026-01-03 00:57:33.089907 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-03 00:57:33.089913 | orchestrator | Saturday 03 January 2026 00:49:52 +0000 (0:00:00.234) 0:03:31.997 ******
2026-01-03 00:57:33.089920 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.089926 | orchestrator |
2026-01-03 00:57:33.089933 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-03 00:57:33.089939 | orchestrator | Saturday 03 January 2026 00:49:52 +0000 (0:00:00.132) 0:03:32.129 ******
2026-01-03 00:57:33.089946 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.089953 | orchestrator |
2026-01-03 00:57:33.089959 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-03 00:57:33.089966 | orchestrator | Saturday 03 January 2026 00:49:53 +0000 (0:00:00.872) 0:03:33.002 ******
2026-01-03 00:57:33.089973 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.089979 | orchestrator |
2026-01-03 00:57:33.089986 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-03 00:57:33.089992 | orchestrator | Saturday 03 January 2026 00:49:54 +0000 (0:00:00.296) 0:03:33.298 ******
2026-01-03 00:57:33.090035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-03 00:57:33.090044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-03 00:57:33.090051 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:57:33.090057 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.090064 | orchestrator |
2026-01-03 00:57:33.090087 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-03 00:57:33.090094 | orchestrator | Saturday 03 January 2026 00:49:54 +0000 (0:00:00.486) 0:03:33.785 ******
2026-01-03 00:57:33.090100 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.090107 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.090113 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.090120 | orchestrator |
2026-01-03 00:57:33.090126 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-03 00:57:33.090133 | orchestrator | Saturday 03 January 2026 00:49:55 +0000 (0:00:00.356) 0:03:34.141 ******
2026-01-03 00:57:33.090139 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.090146 | orchestrator |
2026-01-03 00:57:33.090153 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-03 00:57:33.090159 | orchestrator | Saturday 03 January 2026 00:49:55 +0000 (0:00:00.320) 0:03:34.462 ******
2026-01-03 00:57:33.090166 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.090172 | orchestrator |
2026-01-03 00:57:33.090179 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-01-03 00:57:33.090185 | orchestrator | Saturday 03 January 2026 00:49:55 +0000 (0:00:00.240) 0:03:34.702 ******
2026-01-03 00:57:33.090192 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.090198 | orchestrator | skipping: [testbed-node-1]
2026-01-03 00:57:33.090205 | orchestrator | skipping: [testbed-node-2]
2026-01-03 00:57:33.090212 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:57:33.090218 | orchestrator | 2026-01-03 00:57:33.090225 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-03 00:57:33.090231 | orchestrator | Saturday 03 January 2026 00:49:56 +0000 (0:00:01.159) 0:03:35.862 ****** 2026-01-03 00:57:33.090238 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.090244 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.090256 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.090262 | orchestrator | 2026-01-03 00:57:33.090269 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-03 00:57:33.090276 | orchestrator | Saturday 03 January 2026 00:49:57 +0000 (0:00:00.334) 0:03:36.197 ****** 2026-01-03 00:57:33.090282 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:57:33.090288 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.090294 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.090300 | orchestrator | 2026-01-03 00:57:33.090307 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-03 00:57:33.090313 | orchestrator | Saturday 03 January 2026 00:49:58 +0000 (0:00:01.146) 0:03:37.343 ****** 2026-01-03 00:57:33.090319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:57:33.090326 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:57:33.090332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:57:33.090339 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.090345 | orchestrator | 2026-01-03 00:57:33.090352 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-03 00:57:33.090358 | orchestrator | Saturday 03 January 2026 00:49:59 +0000 (0:00:00.984) 
0:03:38.328 ****** 2026-01-03 00:57:33.090365 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.090372 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.090378 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.090385 | orchestrator | 2026-01-03 00:57:33.090391 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-03 00:57:33.090401 | orchestrator | Saturday 03 January 2026 00:49:59 +0000 (0:00:00.658) 0:03:38.986 ****** 2026-01-03 00:57:33.090407 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.090414 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.090421 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.090427 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:57:33.090434 | orchestrator | 2026-01-03 00:57:33.090440 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-03 00:57:33.090447 | orchestrator | Saturday 03 January 2026 00:50:00 +0000 (0:00:00.857) 0:03:39.845 ****** 2026-01-03 00:57:33.090454 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.090460 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.090467 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.090473 | orchestrator | 2026-01-03 00:57:33.090480 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-03 00:57:33.090487 | orchestrator | Saturday 03 January 2026 00:50:01 +0000 (0:00:00.587) 0:03:40.433 ****** 2026-01-03 00:57:33.090493 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.090500 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:57:33.090506 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.090513 | orchestrator | 2026-01-03 00:57:33.090520 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] 
******************** 2026-01-03 00:57:33.090526 | orchestrator | Saturday 03 January 2026 00:50:02 +0000 (0:00:01.321) 0:03:41.754 ****** 2026-01-03 00:57:33.090533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:57:33.090540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:57:33.090546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:57:33.090553 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.090559 | orchestrator | 2026-01-03 00:57:33.090566 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-03 00:57:33.090573 | orchestrator | Saturday 03 January 2026 00:50:03 +0000 (0:00:00.614) 0:03:42.369 ****** 2026-01-03 00:57:33.090579 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.090586 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.090592 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.090599 | orchestrator | 2026-01-03 00:57:33.090610 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-03 00:57:33.090616 | orchestrator | Saturday 03 January 2026 00:50:03 +0000 (0:00:00.370) 0:03:42.740 ****** 2026-01-03 00:57:33.090623 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.090630 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.090651 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.090658 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.090664 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.090671 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.090677 | orchestrator | 2026-01-03 00:57:33.090684 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-03 00:57:33.090691 | orchestrator | Saturday 03 January 2026 00:50:04 +0000 (0:00:01.004) 0:03:43.744 ****** 2026-01-03 
00:57:33.090697 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.090704 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.090711 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.090717 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.090724 | orchestrator | 2026-01-03 00:57:33.090730 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-03 00:57:33.090737 | orchestrator | Saturday 03 January 2026 00:50:05 +0000 (0:00:00.970) 0:03:44.715 ****** 2026-01-03 00:57:33.090744 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.090750 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.090757 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.090764 | orchestrator | 2026-01-03 00:57:33.090770 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-03 00:57:33.090777 | orchestrator | Saturday 03 January 2026 00:50:06 +0000 (0:00:00.504) 0:03:45.219 ****** 2026-01-03 00:57:33.090783 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.090790 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.090797 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.090803 | orchestrator | 2026-01-03 00:57:33.090810 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-03 00:57:33.090816 | orchestrator | Saturday 03 January 2026 00:50:07 +0000 (0:00:01.343) 0:03:46.563 ****** 2026-01-03 00:57:33.090823 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-03 00:57:33.090830 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-03 00:57:33.090836 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-03 00:57:33.090843 | orchestrator | skipping: [testbed-node-0] 2026-01-03 
00:57:33.090849 | orchestrator | 2026-01-03 00:57:33.090856 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-03 00:57:33.090862 | orchestrator | Saturday 03 January 2026 00:50:07 +0000 (0:00:00.550) 0:03:47.113 ****** 2026-01-03 00:57:33.090869 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.090875 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.090882 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.090889 | orchestrator | 2026-01-03 00:57:33.090895 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-03 00:57:33.090902 | orchestrator | 2026-01-03 00:57:33.090908 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-03 00:57:33.090915 | orchestrator | Saturday 03 January 2026 00:50:08 +0000 (0:00:00.487) 0:03:47.600 ****** 2026-01-03 00:57:33.090922 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.090929 | orchestrator | 2026-01-03 00:57:33.090935 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-03 00:57:33.090942 | orchestrator | Saturday 03 January 2026 00:50:09 +0000 (0:00:00.650) 0:03:48.251 ****** 2026-01-03 00:57:33.090951 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.090962 | orchestrator | 2026-01-03 00:57:33.090969 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-03 00:57:33.090975 | orchestrator | Saturday 03 January 2026 00:50:09 +0000 (0:00:00.504) 0:03:48.755 ****** 2026-01-03 00:57:33.090982 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.090988 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.091023 | 
orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.091031 | orchestrator | 2026-01-03 00:57:33.091038 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-03 00:57:33.091044 | orchestrator | Saturday 03 January 2026 00:50:10 +0000 (0:00:01.051) 0:03:49.807 ****** 2026-01-03 00:57:33.091051 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.091057 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.091064 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.091070 | orchestrator | 2026-01-03 00:57:33.091077 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-03 00:57:33.091083 | orchestrator | Saturday 03 January 2026 00:50:10 +0000 (0:00:00.263) 0:03:50.071 ****** 2026-01-03 00:57:33.091090 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.091096 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.091102 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.091108 | orchestrator | 2026-01-03 00:57:33.091114 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-03 00:57:33.091120 | orchestrator | Saturday 03 January 2026 00:50:11 +0000 (0:00:00.263) 0:03:50.334 ****** 2026-01-03 00:57:33.091126 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.091132 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.091139 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.091145 | orchestrator | 2026-01-03 00:57:33.091151 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-03 00:57:33.091157 | orchestrator | Saturday 03 January 2026 00:50:11 +0000 (0:00:00.287) 0:03:50.621 ****** 2026-01-03 00:57:33.091164 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.091193 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.091200 | orchestrator | ok: 
[testbed-node-2] 2026-01-03 00:57:33.091206 | orchestrator | 2026-01-03 00:57:33.091212 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-03 00:57:33.091218 | orchestrator | Saturday 03 January 2026 00:50:12 +0000 (0:00:00.970) 0:03:51.592 ****** 2026-01-03 00:57:33.091225 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.091231 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.091237 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.091243 | orchestrator | 2026-01-03 00:57:33.091250 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-03 00:57:33.091272 | orchestrator | Saturday 03 January 2026 00:50:12 +0000 (0:00:00.300) 0:03:51.893 ****** 2026-01-03 00:57:33.091279 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.091285 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.091291 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.091297 | orchestrator | 2026-01-03 00:57:33.091304 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-03 00:57:33.091310 | orchestrator | Saturday 03 January 2026 00:50:13 +0000 (0:00:00.274) 0:03:52.167 ****** 2026-01-03 00:57:33.091317 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.091323 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.091330 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.091336 | orchestrator | 2026-01-03 00:57:33.091343 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-03 00:57:33.091350 | orchestrator | Saturday 03 January 2026 00:50:13 +0000 (0:00:00.754) 0:03:52.922 ****** 2026-01-03 00:57:33.091356 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.091363 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.091369 | orchestrator | ok: [testbed-node-2] 2026-01-03 
00:57:33.091376 | orchestrator | 2026-01-03 00:57:33.091382 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-03 00:57:33.091396 | orchestrator | Saturday 03 January 2026 00:50:14 +0000 (0:00:00.940) 0:03:53.863 ****** 2026-01-03 00:57:33.091402 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.091409 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.091415 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.091422 | orchestrator | 2026-01-03 00:57:33.091429 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-03 00:57:33.091435 | orchestrator | Saturday 03 January 2026 00:50:15 +0000 (0:00:00.309) 0:03:54.172 ****** 2026-01-03 00:57:33.091442 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.091448 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.091455 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.091461 | orchestrator | 2026-01-03 00:57:33.091468 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-03 00:57:33.091475 | orchestrator | Saturday 03 January 2026 00:50:15 +0000 (0:00:00.373) 0:03:54.546 ****** 2026-01-03 00:57:33.091481 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.091488 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.091494 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.091501 | orchestrator | 2026-01-03 00:57:33.091507 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-03 00:57:33.091514 | orchestrator | Saturday 03 January 2026 00:50:15 +0000 (0:00:00.327) 0:03:54.874 ****** 2026-01-03 00:57:33.091521 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.091527 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.091534 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.091540 | 
orchestrator | 2026-01-03 00:57:33.091547 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-03 00:57:33.091553 | orchestrator | Saturday 03 January 2026 00:50:16 +0000 (0:00:00.299) 0:03:55.173 ****** 2026-01-03 00:57:33.091560 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.091566 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.091573 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.091579 | orchestrator | 2026-01-03 00:57:33.091586 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-03 00:57:33.091593 | orchestrator | Saturday 03 January 2026 00:50:16 +0000 (0:00:00.650) 0:03:55.824 ****** 2026-01-03 00:57:33.091599 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.091606 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.091612 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.091618 | orchestrator | 2026-01-03 00:57:33.091628 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-03 00:57:33.091635 | orchestrator | Saturday 03 January 2026 00:50:17 +0000 (0:00:00.324) 0:03:56.148 ****** 2026-01-03 00:57:33.091641 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.091647 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.091653 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.091660 | orchestrator | 2026-01-03 00:57:33.091665 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-03 00:57:33.091671 | orchestrator | Saturday 03 January 2026 00:50:17 +0000 (0:00:00.342) 0:03:56.490 ****** 2026-01-03 00:57:33.091677 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.091683 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.091690 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.091697 | orchestrator | 
2026-01-03 00:57:33.091703 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-03 00:57:33.091710 | orchestrator | Saturday 03 January 2026 00:50:17 +0000 (0:00:00.330) 0:03:56.821 ****** 2026-01-03 00:57:33.091717 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.091723 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.091753 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.091759 | orchestrator | 2026-01-03 00:57:33.091766 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-03 00:57:33.091773 | orchestrator | Saturday 03 January 2026 00:50:18 +0000 (0:00:00.721) 0:03:57.543 ****** 2026-01-03 00:57:33.091787 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.091793 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.091800 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.091806 | orchestrator | 2026-01-03 00:57:33.091813 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-03 00:57:33.091819 | orchestrator | Saturday 03 January 2026 00:50:18 +0000 (0:00:00.565) 0:03:58.109 ****** 2026-01-03 00:57:33.091826 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.091833 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.091839 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.091846 | orchestrator | 2026-01-03 00:57:33.091852 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-03 00:57:33.091859 | orchestrator | Saturday 03 January 2026 00:50:19 +0000 (0:00:00.434) 0:03:58.543 ****** 2026-01-03 00:57:33.091866 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.091872 | orchestrator | 2026-01-03 00:57:33.091879 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
************** 2026-01-03 00:57:33.091902 | orchestrator | Saturday 03 January 2026 00:50:20 +0000 (0:00:00.891) 0:03:59.435 ****** 2026-01-03 00:57:33.091909 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.091916 | orchestrator | 2026-01-03 00:57:33.091923 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-01-03 00:57:33.091929 | orchestrator | Saturday 03 January 2026 00:50:20 +0000 (0:00:00.171) 0:03:59.606 ****** 2026-01-03 00:57:33.091936 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-03 00:57:33.091942 | orchestrator | 2026-01-03 00:57:33.091949 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-01-03 00:57:33.091955 | orchestrator | Saturday 03 January 2026 00:50:21 +0000 (0:00:01.112) 0:04:00.719 ****** 2026-01-03 00:57:33.091962 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.091968 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.091975 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.091981 | orchestrator | 2026-01-03 00:57:33.091988 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-03 00:57:33.092022 | orchestrator | Saturday 03 January 2026 00:50:22 +0000 (0:00:00.524) 0:04:01.244 ****** 2026-01-03 00:57:33.092029 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.092036 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.092042 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.092049 | orchestrator | 2026-01-03 00:57:33.092055 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-01-03 00:57:33.092062 | orchestrator | Saturday 03 January 2026 00:50:22 +0000 (0:00:00.375) 0:04:01.619 ****** 2026-01-03 00:57:33.092068 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.092075 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.092097 | 
orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.092104 | orchestrator | 2026-01-03 00:57:33.092111 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-01-03 00:57:33.092118 | orchestrator | Saturday 03 January 2026 00:50:23 +0000 (0:00:01.487) 0:04:03.106 ****** 2026-01-03 00:57:33.092124 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.092131 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.092137 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.092144 | orchestrator | 2026-01-03 00:57:33.092151 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-01-03 00:57:33.092157 | orchestrator | Saturday 03 January 2026 00:50:24 +0000 (0:00:00.773) 0:04:03.880 ****** 2026-01-03 00:57:33.092163 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.092169 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.092175 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.092182 | orchestrator | 2026-01-03 00:57:33.092187 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-03 00:57:33.092194 | orchestrator | Saturday 03 January 2026 00:50:25 +0000 (0:00:00.725) 0:04:04.605 ****** 2026-01-03 00:57:33.092206 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.092213 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.092220 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.092226 | orchestrator | 2026-01-03 00:57:33.092233 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-03 00:57:33.092240 | orchestrator | Saturday 03 January 2026 00:50:26 +0000 (0:00:00.640) 0:04:05.246 ****** 2026-01-03 00:57:33.092246 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.092253 | orchestrator | 2026-01-03 00:57:33.092260 | orchestrator | TASK [ceph-mon : Slurp admin keyring] 
****************************************** 2026-01-03 00:57:33.092266 | orchestrator | Saturday 03 January 2026 00:50:27 +0000 (0:00:01.417) 0:04:06.664 ****** 2026-01-03 00:57:33.092273 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.092280 | orchestrator | 2026-01-03 00:57:33.092286 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-01-03 00:57:33.092296 | orchestrator | Saturday 03 January 2026 00:50:28 +0000 (0:00:01.107) 0:04:07.771 ****** 2026-01-03 00:57:33.092302 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-03 00:57:33.092309 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:57:33.092316 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:57:33.092322 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-03 00:57:33.092329 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-03 00:57:33.092335 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-03 00:57:33.092342 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:57:33.092348 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-01-03 00:57:33.092355 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:57:33.092362 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-01-03 00:57:33.092368 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-01-03 00:57:33.092375 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-01-03 00:57:33.092382 | orchestrator | 2026-01-03 00:57:33.092388 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-01-03 00:57:33.092395 | orchestrator | Saturday 03 January 2026 00:50:32 +0000 (0:00:03.837) 0:04:11.609 ****** 2026-01-03 
00:57:33.092402 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.092408 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.092415 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.092421 | orchestrator | 2026-01-03 00:57:33.092428 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-01-03 00:57:33.092435 | orchestrator | Saturday 03 January 2026 00:50:33 +0000 (0:00:01.251) 0:04:12.860 ****** 2026-01-03 00:57:33.092441 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.092448 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.092454 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.092461 | orchestrator | 2026-01-03 00:57:33.092468 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-03 00:57:33.092474 | orchestrator | Saturday 03 January 2026 00:50:34 +0000 (0:00:00.376) 0:04:13.237 ****** 2026-01-03 00:57:33.092481 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.092487 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.092494 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.092501 | orchestrator | 2026-01-03 00:57:33.092507 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-03 00:57:33.092530 | orchestrator | Saturday 03 January 2026 00:50:34 +0000 (0:00:00.774) 0:04:14.011 ****** 2026-01-03 00:57:33.092537 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.092543 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.092550 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.092556 | orchestrator | 2026-01-03 00:57:33.092563 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-03 00:57:33.092575 | orchestrator | Saturday 03 January 2026 00:50:37 +0000 (0:00:02.130) 0:04:16.141 ****** 2026-01-03 00:57:33.092581 | orchestrator | changed: 
[testbed-node-0] 2026-01-03 00:57:33.092588 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.092595 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.092601 | orchestrator | 2026-01-03 00:57:33.092608 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-03 00:57:33.092615 | orchestrator | Saturday 03 January 2026 00:50:38 +0000 (0:00:01.190) 0:04:17.332 ****** 2026-01-03 00:57:33.092621 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.092628 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.092634 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.092641 | orchestrator | 2026-01-03 00:57:33.092648 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-03 00:57:33.092654 | orchestrator | Saturday 03 January 2026 00:50:38 +0000 (0:00:00.309) 0:04:17.642 ****** 2026-01-03 00:57:33.092660 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.092667 | orchestrator | 2026-01-03 00:57:33.092673 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-03 00:57:33.092680 | orchestrator | Saturday 03 January 2026 00:50:39 +0000 (0:00:00.764) 0:04:18.407 ****** 2026-01-03 00:57:33.092686 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.092693 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.092699 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.092706 | orchestrator | 2026-01-03 00:57:33.092713 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-03 00:57:33.092719 | orchestrator | Saturday 03 January 2026 00:50:40 +0000 (0:00:01.009) 0:04:19.416 ****** 2026-01-03 00:57:33.092726 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.092732 | orchestrator | 
skipping: [testbed-node-1] 2026-01-03 00:57:33.092739 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.092746 | orchestrator | 2026-01-03 00:57:33.092752 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-01-03 00:57:33.092759 | orchestrator | Saturday 03 January 2026 00:50:40 +0000 (0:00:00.401) 0:04:19.818 ****** 2026-01-03 00:57:33.092766 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.092772 | orchestrator | 2026-01-03 00:57:33.092779 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-01-03 00:57:33.092785 | orchestrator | Saturday 03 January 2026 00:50:41 +0000 (0:00:00.893) 0:04:20.711 ****** 2026-01-03 00:57:33.092792 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.092798 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.092805 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.092811 | orchestrator | 2026-01-03 00:57:33.092818 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-01-03 00:57:33.092824 | orchestrator | Saturday 03 January 2026 00:50:43 +0000 (0:00:02.171) 0:04:22.883 ****** 2026-01-03 00:57:33.092831 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.092837 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.092846 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.092853 | orchestrator | 2026-01-03 00:57:33.092860 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-01-03 00:57:33.092866 | orchestrator | Saturday 03 January 2026 00:50:45 +0000 (0:00:01.283) 0:04:24.166 ****** 2026-01-03 00:57:33.092873 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.092879 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.092886 | orchestrator | 
changed: [testbed-node-2] 2026-01-03 00:57:33.092892 | orchestrator | 2026-01-03 00:57:33.092899 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-03 00:57:33.092905 | orchestrator | Saturday 03 January 2026 00:50:47 +0000 (0:00:02.118) 0:04:26.285 ****** 2026-01-03 00:57:33.092916 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.092923 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.092929 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.092936 | orchestrator | 2026-01-03 00:57:33.092942 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-01-03 00:57:33.092949 | orchestrator | Saturday 03 January 2026 00:50:49 +0000 (0:00:02.202) 0:04:28.488 ****** 2026-01-03 00:57:33.092955 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.092962 | orchestrator | 2026-01-03 00:57:33.092968 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-01-03 00:57:33.092975 | orchestrator | Saturday 03 January 2026 00:50:50 +0000 (0:00:00.677) 0:04:29.166 ****** 2026-01-03 00:57:33.092981 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.092988 | orchestrator | 2026-01-03 00:57:33.093005 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-03 00:57:33.093012 | orchestrator | Saturday 03 January 2026 00:50:52 +0000 (0:00:01.992) 0:04:31.158 ****** 2026-01-03 00:57:33.093018 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.093025 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.093032 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.093038 | orchestrator | 2026-01-03 00:57:33.093045 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-03 00:57:33.093052 | orchestrator | Saturday 03 January 2026 00:51:02 +0000 (0:00:10.277) 0:04:41.436 ****** 2026-01-03 00:57:33.093058 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.093065 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.093071 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.093078 | orchestrator | 2026-01-03 00:57:33.093084 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-03 00:57:33.093106 | orchestrator | Saturday 03 January 2026 00:51:03 +0000 (0:00:00.976) 0:04:42.413 ****** 2026-01-03 00:57:33.093115 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46634a02abd21c7efb8fcd3cec55726ed1d8ccef'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-03 00:57:33.093123 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46634a02abd21c7efb8fcd3cec55726ed1d8ccef'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-03 00:57:33.093131 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46634a02abd21c7efb8fcd3cec55726ed1d8ccef'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-03 00:57:33.093139 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46634a02abd21c7efb8fcd3cec55726ed1d8ccef'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-03 00:57:33.093146 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46634a02abd21c7efb8fcd3cec55726ed1d8ccef'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-03 00:57:33.093157 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46634a02abd21c7efb8fcd3cec55726ed1d8ccef'}}, {'key': 'osd_crush_chooseleaf_type', 'value': 
'__omit_place_holder__46634a02abd21c7efb8fcd3cec55726ed1d8ccef'}])  2026-01-03 00:57:33.093165 | orchestrator | 2026-01-03 00:57:33.093174 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-03 00:57:33.093181 | orchestrator | Saturday 03 January 2026 00:51:17 +0000 (0:00:14.105) 0:04:56.518 ****** 2026-01-03 00:57:33.093188 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.093194 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.093201 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.093207 | orchestrator | 2026-01-03 00:57:33.093214 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-03 00:57:33.093220 | orchestrator | Saturday 03 January 2026 00:51:17 +0000 (0:00:00.327) 0:04:56.846 ****** 2026-01-03 00:57:33.093227 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.093234 | orchestrator | 2026-01-03 00:57:33.093240 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-03 00:57:33.093247 | orchestrator | Saturday 03 January 2026 00:51:18 +0000 (0:00:00.801) 0:04:57.647 ****** 2026-01-03 00:57:33.093253 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.093260 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.093266 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.093273 | orchestrator | 2026-01-03 00:57:33.093280 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-03 00:57:33.093286 | orchestrator | Saturday 03 January 2026 00:51:18 +0000 (0:00:00.326) 0:04:57.973 ****** 2026-01-03 00:57:33.093293 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.093299 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.093306 | orchestrator | skipping: [testbed-node-2] 2026-01-03 
00:57:33.093313 | orchestrator | 2026-01-03 00:57:33.093319 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-03 00:57:33.093326 | orchestrator | Saturday 03 January 2026 00:51:19 +0000 (0:00:00.454) 0:04:58.428 ****** 2026-01-03 00:57:33.093332 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-03 00:57:33.093339 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-03 00:57:33.093345 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-03 00:57:33.093352 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.093358 | orchestrator | 2026-01-03 00:57:33.093364 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-03 00:57:33.093371 | orchestrator | Saturday 03 January 2026 00:51:20 +0000 (0:00:00.910) 0:04:59.339 ****** 2026-01-03 00:57:33.093378 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.093385 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.093404 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.093411 | orchestrator | 2026-01-03 00:57:33.093418 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-01-03 00:57:33.093424 | orchestrator | 2026-01-03 00:57:33.093431 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-03 00:57:33.093438 | orchestrator | Saturday 03 January 2026 00:51:21 +0000 (0:00:00.910) 0:05:00.250 ****** 2026-01-03 00:57:33.093444 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.093451 | orchestrator | 2026-01-03 00:57:33.093457 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-03 00:57:33.093464 | orchestrator | Saturday 03 January 2026 00:51:21 +0000 
(0:00:00.504) 0:05:00.755 ****** 2026-01-03 00:57:33.093471 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.093481 | orchestrator | 2026-01-03 00:57:33.093487 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-03 00:57:33.093494 | orchestrator | Saturday 03 January 2026 00:51:22 +0000 (0:00:00.619) 0:05:01.374 ****** 2026-01-03 00:57:33.093500 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.093507 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.093513 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.093520 | orchestrator | 2026-01-03 00:57:33.093527 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-03 00:57:33.093533 | orchestrator | Saturday 03 January 2026 00:51:22 +0000 (0:00:00.704) 0:05:02.078 ****** 2026-01-03 00:57:33.093540 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.093546 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.093553 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.093559 | orchestrator | 2026-01-03 00:57:33.093566 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-03 00:57:33.093573 | orchestrator | Saturday 03 January 2026 00:51:23 +0000 (0:00:00.316) 0:05:02.395 ****** 2026-01-03 00:57:33.093579 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.093586 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.093592 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.093599 | orchestrator | 2026-01-03 00:57:33.093605 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-03 00:57:33.093612 | orchestrator | Saturday 03 January 2026 00:51:23 +0000 (0:00:00.422) 0:05:02.818 ****** 2026-01-03 00:57:33.093618 | 
orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.093625 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.093632 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.093638 | orchestrator | 2026-01-03 00:57:33.093644 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-03 00:57:33.093651 | orchestrator | Saturday 03 January 2026 00:51:23 +0000 (0:00:00.277) 0:05:03.095 ****** 2026-01-03 00:57:33.093658 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.093664 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.093671 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.093677 | orchestrator | 2026-01-03 00:57:33.093684 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-03 00:57:33.093691 | orchestrator | Saturday 03 January 2026 00:51:24 +0000 (0:00:00.699) 0:05:03.794 ****** 2026-01-03 00:57:33.093697 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.093704 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.093710 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.093717 | orchestrator | 2026-01-03 00:57:33.093726 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-03 00:57:33.093732 | orchestrator | Saturday 03 January 2026 00:51:24 +0000 (0:00:00.293) 0:05:04.088 ****** 2026-01-03 00:57:33.093739 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.093746 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.093752 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.093759 | orchestrator | 2026-01-03 00:57:33.093765 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-03 00:57:33.093772 | orchestrator | Saturday 03 January 2026 00:51:25 +0000 (0:00:00.273) 0:05:04.361 ****** 2026-01-03 00:57:33.093778 | orchestrator | ok: 
[testbed-node-0] 2026-01-03 00:57:33.093785 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.093792 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.093798 | orchestrator | 2026-01-03 00:57:33.093805 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-03 00:57:33.093811 | orchestrator | Saturday 03 January 2026 00:51:26 +0000 (0:00:00.895) 0:05:05.257 ****** 2026-01-03 00:57:33.093818 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.093824 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.093831 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.093841 | orchestrator | 2026-01-03 00:57:33.093848 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-03 00:57:33.093854 | orchestrator | Saturday 03 January 2026 00:51:26 +0000 (0:00:00.715) 0:05:05.972 ****** 2026-01-03 00:57:33.093861 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.093867 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.093874 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.093881 | orchestrator | 2026-01-03 00:57:33.093887 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-03 00:57:33.093894 | orchestrator | Saturday 03 January 2026 00:51:27 +0000 (0:00:00.280) 0:05:06.253 ****** 2026-01-03 00:57:33.093900 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.093907 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.093913 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.093920 | orchestrator | 2026-01-03 00:57:33.093927 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-03 00:57:33.093933 | orchestrator | Saturday 03 January 2026 00:51:27 +0000 (0:00:00.289) 0:05:06.543 ****** 2026-01-03 00:57:33.093940 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.093946 | 
orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.093953 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.093959 | orchestrator | 2026-01-03 00:57:33.093966 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-03 00:57:33.093973 | orchestrator | Saturday 03 January 2026 00:51:27 +0000 (0:00:00.417) 0:05:06.960 ****** 2026-01-03 00:57:33.093992 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.094056 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.094063 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.094070 | orchestrator | 2026-01-03 00:57:33.094076 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-03 00:57:33.094083 | orchestrator | Saturday 03 January 2026 00:51:28 +0000 (0:00:00.261) 0:05:07.222 ****** 2026-01-03 00:57:33.094090 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.094097 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.094104 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.094111 | orchestrator | 2026-01-03 00:57:33.094117 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-03 00:57:33.094124 | orchestrator | Saturday 03 January 2026 00:51:28 +0000 (0:00:00.258) 0:05:07.481 ****** 2026-01-03 00:57:33.094131 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.094138 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.094144 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.094151 | orchestrator | 2026-01-03 00:57:33.094157 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-03 00:57:33.094164 | orchestrator | Saturday 03 January 2026 00:51:28 +0000 (0:00:00.256) 0:05:07.737 ****** 2026-01-03 00:57:33.094171 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.094177 | 
orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.094184 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.094191 | orchestrator | 2026-01-03 00:57:33.094198 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-03 00:57:33.094204 | orchestrator | Saturday 03 January 2026 00:51:29 +0000 (0:00:00.505) 0:05:08.243 ****** 2026-01-03 00:57:33.094211 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.094218 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.094224 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.094231 | orchestrator | 2026-01-03 00:57:33.094238 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-03 00:57:33.094244 | orchestrator | Saturday 03 January 2026 00:51:29 +0000 (0:00:00.317) 0:05:08.561 ****** 2026-01-03 00:57:33.094251 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.094258 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.094265 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.094272 | orchestrator | 2026-01-03 00:57:33.094279 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-03 00:57:33.094290 | orchestrator | Saturday 03 January 2026 00:51:29 +0000 (0:00:00.298) 0:05:08.859 ****** 2026-01-03 00:57:33.094296 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.094303 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.094310 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.094316 | orchestrator | 2026-01-03 00:57:33.094323 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-03 00:57:33.094329 | orchestrator | Saturday 03 January 2026 00:51:30 +0000 (0:00:00.736) 0:05:09.596 ****** 2026-01-03 00:57:33.094335 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-03 00:57:33.094342 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-03 00:57:33.094348 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-03 00:57:33.094354 | orchestrator | 2026-01-03 00:57:33.094361 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-03 00:57:33.094366 | orchestrator | Saturday 03 January 2026 00:51:31 +0000 (0:00:00.747) 0:05:10.343 ****** 2026-01-03 00:57:33.094376 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.094382 | orchestrator | 2026-01-03 00:57:33.094388 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-03 00:57:33.094393 | orchestrator | Saturday 03 January 2026 00:51:31 +0000 (0:00:00.528) 0:05:10.872 ****** 2026-01-03 00:57:33.094399 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.094404 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.094409 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.094416 | orchestrator | 2026-01-03 00:57:33.094421 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-03 00:57:33.094427 | orchestrator | Saturday 03 January 2026 00:51:32 +0000 (0:00:00.707) 0:05:11.580 ****** 2026-01-03 00:57:33.094432 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.094438 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.094444 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.094449 | orchestrator | 2026-01-03 00:57:33.094455 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-03 00:57:33.094461 | orchestrator | Saturday 03 January 2026 00:51:33 +0000 (0:00:00.628) 0:05:12.208 ****** 2026-01-03 00:57:33.094467 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-03 
00:57:33.094473 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-03 00:57:33.094479 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-03 00:57:33.094485 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-03 00:57:33.094492 | orchestrator | 2026-01-03 00:57:33.094497 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-03 00:57:33.094503 | orchestrator | Saturday 03 January 2026 00:51:43 +0000 (0:00:10.605) 0:05:22.813 ****** 2026-01-03 00:57:33.094509 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.094515 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.094521 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.094527 | orchestrator | 2026-01-03 00:57:33.094533 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-03 00:57:33.094540 | orchestrator | Saturday 03 January 2026 00:51:44 +0000 (0:00:00.361) 0:05:23.174 ****** 2026-01-03 00:57:33.094546 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-03 00:57:33.094552 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-03 00:57:33.094559 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-03 00:57:33.094565 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-03 00:57:33.094572 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:57:33.094604 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:57:33.094611 | orchestrator | 2026-01-03 00:57:33.094623 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-03 00:57:33.094629 | orchestrator | Saturday 03 January 2026 00:51:46 +0000 (0:00:02.379) 0:05:25.554 ****** 2026-01-03 00:57:33.094635 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-03 00:57:33.094641 | 
orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-03 00:57:33.094647 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-03 00:57:33.094653 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-03 00:57:33.094659 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-03 00:57:33.094665 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-03 00:57:33.094671 | orchestrator | 2026-01-03 00:57:33.094677 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-03 00:57:33.094683 | orchestrator | Saturday 03 January 2026 00:51:47 +0000 (0:00:01.469) 0:05:27.023 ****** 2026-01-03 00:57:33.094689 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.094695 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.094700 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.094706 | orchestrator | 2026-01-03 00:57:33.094712 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-03 00:57:33.094718 | orchestrator | Saturday 03 January 2026 00:51:48 +0000 (0:00:01.067) 0:05:28.091 ****** 2026-01-03 00:57:33.094723 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.094730 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.094736 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.094742 | orchestrator | 2026-01-03 00:57:33.094748 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-03 00:57:33.094754 | orchestrator | Saturday 03 January 2026 00:51:49 +0000 (0:00:00.366) 0:05:28.458 ****** 2026-01-03 00:57:33.094760 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.094766 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.094771 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.094777 | orchestrator | 2026-01-03 00:57:33.094783 | orchestrator | TASK [ceph-mgr : Include 
start_mgr.yml] **************************************** 2026-01-03 00:57:33.094790 | orchestrator | Saturday 03 January 2026 00:51:49 +0000 (0:00:00.322) 0:05:28.781 ****** 2026-01-03 00:57:33.094796 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.094802 | orchestrator | 2026-01-03 00:57:33.094809 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-01-03 00:57:33.094815 | orchestrator | Saturday 03 January 2026 00:51:50 +0000 (0:00:00.804) 0:05:29.586 ****** 2026-01-03 00:57:33.094822 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.094828 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.094835 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.094841 | orchestrator | 2026-01-03 00:57:33.094848 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-01-03 00:57:33.094855 | orchestrator | Saturday 03 January 2026 00:51:50 +0000 (0:00:00.358) 0:05:29.944 ****** 2026-01-03 00:57:33.094861 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.094868 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.094874 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.094881 | orchestrator | 2026-01-03 00:57:33.094887 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-01-03 00:57:33.094893 | orchestrator | Saturday 03 January 2026 00:51:51 +0000 (0:00:00.326) 0:05:30.271 ****** 2026-01-03 00:57:33.094903 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-01-03 00:57:33.094910 | orchestrator | 2026-01-03 00:57:33.094916 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-01-03 00:57:33.094923 | orchestrator | Saturday 03 January 2026 00:51:51 
+0000 (0:00:00.856) 0:05:31.128 ****** 2026-01-03 00:57:33.094929 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.094936 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.094951 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.094957 | orchestrator | 2026-01-03 00:57:33.094964 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-01-03 00:57:33.094971 | orchestrator | Saturday 03 January 2026 00:51:53 +0000 (0:00:01.598) 0:05:32.727 ****** 2026-01-03 00:57:33.094977 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.094984 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.094991 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.095015 | orchestrator | 2026-01-03 00:57:33.095022 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-01-03 00:57:33.095029 | orchestrator | Saturday 03 January 2026 00:51:54 +0000 (0:00:01.336) 0:05:34.063 ****** 2026-01-03 00:57:33.095035 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.095042 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.095047 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.095054 | orchestrator | 2026-01-03 00:57:33.095060 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-01-03 00:57:33.095066 | orchestrator | Saturday 03 January 2026 00:51:57 +0000 (0:00:02.167) 0:05:36.231 ****** 2026-01-03 00:57:33.095072 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.095078 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.095085 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.095090 | orchestrator | 2026-01-03 00:57:33.095097 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-01-03 00:57:33.095104 | orchestrator | Saturday 03 January 2026 00:51:58 +0000 
(0:00:01.691) 0:05:37.923 ****** 2026-01-03 00:57:33.095110 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.095117 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.095124 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-01-03 00:57:33.095130 | orchestrator | 2026-01-03 00:57:33.095137 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-01-03 00:57:33.095144 | orchestrator | Saturday 03 January 2026 00:51:59 +0000 (0:00:00.690) 0:05:38.614 ****** 2026-01-03 00:57:33.095185 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-01-03 00:57:33.095192 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-01-03 00:57:33.095199 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-01-03 00:57:33.095206 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-01-03 00:57:33.095212 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-01-03 00:57:33.095219 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-01-03 00:57:33.095225 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-03 00:57:33.095232 | orchestrator |
2026-01-03 00:57:33.095239 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-01-03 00:57:33.095245 | orchestrator | Saturday 03 January 2026 00:52:35 +0000 (0:00:35.838) 0:06:14.452 ******
2026-01-03 00:57:33.095252 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-03 00:57:33.095259 | orchestrator |
2026-01-03 00:57:33.095265 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-01-03 00:57:33.095272 | orchestrator | Saturday 03 January 2026 00:52:36 +0000 (0:00:01.194) 0:06:15.646 ******
2026-01-03 00:57:33.095279 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.095285 | orchestrator |
2026-01-03 00:57:33.095292 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-01-03 00:57:33.095298 | orchestrator | Saturday 03 January 2026 00:52:36 +0000 (0:00:00.262) 0:06:15.908 ******
2026-01-03 00:57:33.095305 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.095316 | orchestrator |
2026-01-03 00:57:33.095323 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-01-03 00:57:33.095329 | orchestrator | Saturday 03 January 2026 00:52:36 +0000 (0:00:00.123) 0:06:16.032 ******
2026-01-03 00:57:33.095336 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-01-03 00:57:33.095342 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-01-03 00:57:33.095349 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-01-03 00:57:33.095355 | orchestrator |
2026-01-03 00:57:33.095362 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-01-03 00:57:33.095369 | orchestrator | Saturday 03 January 2026 00:52:43 +0000 (0:00:06.285) 0:06:22.317 ******
2026-01-03 00:57:33.095375 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-01-03 00:57:33.095382 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-01-03 00:57:33.095388 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-01-03 00:57:33.095395 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-01-03 00:57:33.095402 | orchestrator |
2026-01-03 00:57:33.095409 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-03 00:57:33.095415 | orchestrator | Saturday 03 January 2026 00:52:48 +0000 (0:00:05.561) 0:06:27.878 ******
2026-01-03 00:57:33.095422 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:57:33.095431 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:57:33.095438 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:57:33.095445 | orchestrator |
2026-01-03 00:57:33.095451 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-03 00:57:33.095458 | orchestrator | Saturday 03 January 2026 00:52:49 +0000 (0:00:00.801) 0:06:28.679 ******
2026-01-03 00:57:33.095465 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 00:57:33.095471 | orchestrator |
2026-01-03 00:57:33.095478 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-03 00:57:33.095484 | orchestrator | Saturday 03 January 2026 00:52:50 +0000 (0:00:00.879) 0:06:29.559 ******
2026-01-03 00:57:33.095491 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.095497 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.095504 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.095511 | orchestrator |
2026-01-03 00:57:33.095517 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-03 00:57:33.095524 | orchestrator | Saturday 03 January 2026 00:52:50 +0000 (0:00:00.390) 0:06:29.949 ******
2026-01-03 00:57:33.095530 | orchestrator | changed: [testbed-node-0]
2026-01-03 00:57:33.095537 | orchestrator | changed: [testbed-node-1]
2026-01-03 00:57:33.095544 | orchestrator | changed: [testbed-node-2]
2026-01-03 00:57:33.095550 | orchestrator |
2026-01-03 00:57:33.095557 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-03 00:57:33.095563 | orchestrator | Saturday 03 January 2026 00:52:52 +0000 (0:00:01.368) 0:06:31.317 ******
2026-01-03 00:57:33.095570 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-03 00:57:33.095577 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-03 00:57:33.095583 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-03 00:57:33.095590 | orchestrator | skipping: [testbed-node-0]
2026-01-03 00:57:33.095596 | orchestrator |
2026-01-03 00:57:33.095603 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-03 00:57:33.095609 | orchestrator | Saturday 03 January 2026 00:52:52 +0000 (0:00:00.609) 0:06:31.926 ******
2026-01-03 00:57:33.095616 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.095622 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.095629 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.095635 | orchestrator |
2026-01-03 00:57:33.095642 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-01-03 00:57:33.095652 | orchestrator |
2026-01-03 00:57:33.095659 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-03 00:57:33.095679 | orchestrator | Saturday 03 January 2026 00:52:53 +0000 (0:00:00.800) 0:06:32.727 ******
2026-01-03 00:57:33.095685 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.095692 | orchestrator |
2026-01-03 00:57:33.095699 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-03 00:57:33.095705 | orchestrator | Saturday 03 January 2026 00:52:54 +0000 (0:00:00.500) 0:06:33.227 ******
2026-01-03 00:57:33.095711 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.095717 | orchestrator |
2026-01-03 00:57:33.095724 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-03 00:57:33.095729 | orchestrator | Saturday 03 January 2026 00:52:54 +0000 (0:00:00.721) 0:06:33.949 ******
2026-01-03 00:57:33.095735 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.095741 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.095747 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.095753 | orchestrator |
2026-01-03 00:57:33.095760 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-03 00:57:33.095766 | orchestrator | Saturday 03 January 2026 00:52:55 +0000 (0:00:00.312) 0:06:34.262 ******
2026-01-03 00:57:33.095772 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.095779 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.095784 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.095791 | orchestrator |
2026-01-03 00:57:33.095797 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-03 00:57:33.095803 | orchestrator | Saturday 03 January 2026 00:52:55 +0000 (0:00:00.736) 0:06:34.998 ******
2026-01-03 00:57:33.095810 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.095816 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.095822 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.095829 | orchestrator |
2026-01-03 00:57:33.095835 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-03 00:57:33.095841 | orchestrator | Saturday 03 January 2026 00:52:56 +0000 (0:00:00.715) 0:06:35.714 ******
2026-01-03 00:57:33.095847 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.095854 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.095860 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.095866 | orchestrator |
2026-01-03 00:57:33.095873 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-03 00:57:33.095879 | orchestrator | Saturday 03 January 2026 00:52:57 +0000 (0:00:01.010) 0:06:36.725 ******
2026-01-03 00:57:33.095885 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.095892 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.095898 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.095904 | orchestrator |
2026-01-03 00:57:33.095910 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-03 00:57:33.095917 | orchestrator | Saturday 03 January 2026 00:52:57 +0000 (0:00:00.313) 0:06:37.038 ******
2026-01-03 00:57:33.095923 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.095930 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.095936 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.095942 | orchestrator |
2026-01-03 00:57:33.095948 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-03 00:57:33.095955 | orchestrator | Saturday 03 January 2026 00:52:58 +0000 (0:00:00.314) 0:06:37.353 ******
2026-01-03 00:57:33.095961 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.095967 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.095976 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.095983 | orchestrator |
2026-01-03 00:57:33.095989 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-03 00:57:33.096013 | orchestrator | Saturday 03 January 2026 00:52:58 +0000 (0:00:00.336) 0:06:37.689 ******
2026-01-03 00:57:33.096020 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.096027 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.096033 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.096039 | orchestrator |
2026-01-03 00:57:33.096046 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-03 00:57:33.096052 | orchestrator | Saturday 03 January 2026 00:52:59 +0000 (0:00:01.045) 0:06:38.735 ******
2026-01-03 00:57:33.096059 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.096065 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.096072 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.096078 | orchestrator |
2026-01-03 00:57:33.096084 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-03 00:57:33.096091 | orchestrator | Saturday 03 January 2026 00:53:00 +0000 (0:00:00.835) 0:06:39.570 ******
2026-01-03 00:57:33.096097 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.096104 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.096110 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.096117 | orchestrator |
2026-01-03 00:57:33.096123 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-03 00:57:33.096129 | orchestrator | Saturday 03 January 2026 00:53:00 +0000 (0:00:00.360) 0:06:39.931 ******
2026-01-03 00:57:33.096136 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.096142 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.096149 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.096155 | orchestrator |
2026-01-03 00:57:33.096161 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-03 00:57:33.096167 | orchestrator | Saturday 03 January 2026 00:53:01 +0000 (0:00:00.315) 0:06:40.246 ******
2026-01-03 00:57:33.096173 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.096180 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.096186 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.096192 | orchestrator |
2026-01-03 00:57:33.096199 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-03 00:57:33.096205 | orchestrator | Saturday 03 January 2026 00:53:01 +0000 (0:00:00.639) 0:06:40.885 ******
2026-01-03 00:57:33.096212 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.096218 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.096224 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.096231 | orchestrator |
2026-01-03 00:57:33.096237 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-03 00:57:33.096258 | orchestrator | Saturday 03 January 2026 00:53:02 +0000 (0:00:00.358) 0:06:41.244 ******
2026-01-03 00:57:33.096264 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.096270 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.096276 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.096282 | orchestrator |
2026-01-03 00:57:33.096289 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-03 00:57:33.096294 | orchestrator | Saturday 03 January 2026 00:53:02 +0000 (0:00:00.377) 0:06:41.622 ******
2026-01-03 00:57:33.096300 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.096306 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.096312 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.096318 | orchestrator |
2026-01-03 00:57:33.096324 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-03 00:57:33.096331 | orchestrator | Saturday 03 January 2026 00:53:02 +0000 (0:00:00.336) 0:06:41.959 ******
2026-01-03 00:57:33.096337 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.096342 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.096348 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.096354 | orchestrator |
2026-01-03 00:57:33.096360 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-03 00:57:33.096365 | orchestrator | Saturday 03 January 2026 00:53:03 +0000 (0:00:00.680) 0:06:42.639 ******
2026-01-03 00:57:33.096376 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.096382 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.096388 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.096394 | orchestrator |
2026-01-03 00:57:33.096402 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-03 00:57:33.096410 | orchestrator | Saturday 03 January 2026 00:53:03 +0000 (0:00:00.347) 0:06:42.986 ******
2026-01-03 00:57:33.096416 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.096421 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.096427 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.096433 | orchestrator |
2026-01-03 00:57:33.096439 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-03 00:57:33.096445 | orchestrator | Saturday 03 January 2026 00:53:04 +0000 (0:00:00.363) 0:06:43.350 ******
2026-01-03 00:57:33.096450 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.096456 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.096462 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.096468 | orchestrator |
2026-01-03 00:57:33.096475 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-01-03 00:57:33.096481 | orchestrator | Saturday 03 January 2026 00:53:04 +0000 (0:00:00.550) 0:06:43.900 ******
2026-01-03 00:57:33.096487 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.096493 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.096499 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.096505 | orchestrator |
2026-01-03 00:57:33.096511 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-01-03 00:57:33.096518 | orchestrator | Saturday 03 January 2026 00:53:05 +0000 (0:00:00.647) 0:06:44.547 ******
2026-01-03 00:57:33.096524 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-03 00:57:33.096530 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-03 00:57:33.096537 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-03 00:57:33.096544 | orchestrator |
2026-01-03 00:57:33.096551 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-01-03 00:57:33.096557 | orchestrator | Saturday 03 January 2026 00:53:06 +0000 (0:00:00.646) 0:06:45.193 ******
2026-01-03 00:57:33.096567 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.096574 | orchestrator |
2026-01-03 00:57:33.096580 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-01-03 00:57:33.096587 | orchestrator | Saturday 03 January 2026 00:53:06 +0000 (0:00:00.515) 0:06:45.709 ******
2026-01-03 00:57:33.096592 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.096598 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.096604 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.096610 | orchestrator |
2026-01-03 00:57:33.096615 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-01-03 00:57:33.096621 | orchestrator | Saturday 03 January 2026 00:53:07 +0000 (0:00:00.655) 0:06:46.365 ******
2026-01-03 00:57:33.096627 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.096633 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.096639 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.096646 | orchestrator |
2026-01-03 00:57:33.096650 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-01-03 00:57:33.096654 | orchestrator | Saturday 03 January 2026 00:53:07 +0000 (0:00:00.341) 0:06:46.706 ******
2026-01-03 00:57:33.096658 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.096662 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.096666 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.096670 | orchestrator |
2026-01-03 00:57:33.096673 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-01-03 00:57:33.096677 | orchestrator | Saturday 03 January 2026 00:53:08 +0000 (0:00:00.734) 0:06:47.441 ******
2026-01-03 00:57:33.096686 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.096690 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.096693 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.096697 | orchestrator |
2026-01-03 00:57:33.096701 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-01-03 00:57:33.096705 | orchestrator | Saturday 03 January 2026 00:53:08 +0000 (0:00:00.452) 0:06:47.893 ******
2026-01-03 00:57:33.096709 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-03 00:57:33.096716 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-03 00:57:33.096722 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-03 00:57:33.096729 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-03 00:57:33.096740 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-03 00:57:33.096747 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-03 00:57:33.096753 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-03 00:57:33.096759 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-03 00:57:33.096765 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-03 00:57:33.096772 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-03 00:57:33.096778 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-03 00:57:33.096784 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-03 00:57:33.096791 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-03 00:57:33.096797 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-03 00:57:33.096803 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-03 00:57:33.096810 | orchestrator |
2026-01-03 00:57:33.096817 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-01-03 00:57:33.096824 | orchestrator | Saturday 03 January 2026 00:53:13 +0000 (0:00:04.667) 0:06:52.561 ******
2026-01-03 00:57:33.096828 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.096832 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.096839 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.096844 | orchestrator |
2026-01-03 00:57:33.096848 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-01-03 00:57:33.096852 | orchestrator | Saturday 03 January 2026 00:53:13 +0000 (0:00:00.358) 0:06:52.920 ******
2026-01-03 00:57:33.096855 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.096860 | orchestrator |
2026-01-03 00:57:33.096867 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-01-03 00:57:33.096873 | orchestrator | Saturday 03 January 2026 00:53:14 +0000 (0:00:00.547) 0:06:53.468 ******
2026-01-03 00:57:33.096879 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-03 00:57:33.096886 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-03 00:57:33.096891 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-03 00:57:33.096897 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-01-03 00:57:33.096904 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-01-03 00:57:33.096910 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-01-03 00:57:33.096916 | orchestrator |
2026-01-03 00:57:33.096921 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-01-03 00:57:33.096931 | orchestrator | Saturday 03 January 2026 00:53:15 +0000 (0:00:01.454) 0:06:54.922 ******
2026-01-03 00:57:33.096936 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-03 00:57:33.096946 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-03 00:57:33.096952 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-03 00:57:33.096958 | orchestrator |
2026-01-03 00:57:33.096963 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-01-03 00:57:33.096969 | orchestrator | Saturday 03 January 2026 00:53:18 +0000 (0:00:02.557) 0:06:57.479 ******
2026-01-03 00:57:33.096975 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-03 00:57:33.096981 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-03 00:57:33.096987 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.096993 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-03 00:57:33.097033 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-03 00:57:33.097040 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.097047 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-03 00:57:33.097053 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-03 00:57:33.097059 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.097066 | orchestrator |
2026-01-03 00:57:33.097072 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-01-03 00:57:33.097078 | orchestrator | Saturday 03 January 2026 00:53:19 +0000 (0:00:01.237) 0:06:58.717 ******
2026-01-03 00:57:33.097084 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-03 00:57:33.097091 | orchestrator |
2026-01-03 00:57:33.097097 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-01-03 00:57:33.097103 | orchestrator | Saturday 03 January 2026 00:53:21 +0000 (0:00:02.221) 0:07:00.938 ******
2026-01-03 00:57:33.097110 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.097117 | orchestrator |
2026-01-03 00:57:33.097123 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-01-03 00:57:33.097129 | orchestrator | Saturday 03 January 2026 00:53:22 +0000 (0:00:00.563) 0:07:01.501 ******
2026-01-03 00:57:33.097136 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f97db499-9f50-5724-b4de-324784fab4ab', 'data_vg': 'ceph-f97db499-9f50-5724-b4de-324784fab4ab'})
2026-01-03 00:57:33.097143 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-124077fc-a709-5275-a3b4-8defea20aa20', 'data_vg': 'ceph-124077fc-a709-5275-a3b4-8defea20aa20'})
2026-01-03 00:57:33.097160 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-147f94e4-6564-5421-8ac2-dc0697a6d722', 'data_vg': 'ceph-147f94e4-6564-5421-8ac2-dc0697a6d722'})
2026-01-03 00:57:33.097165 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-293f14c0-405b-5b3a-a5c8-f3b182003048', 'data_vg': 'ceph-293f14c0-405b-5b3a-a5c8-f3b182003048'})
2026-01-03 00:57:33.097169 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-43909478-d18c-58e7-896e-8d0e3e550915', 'data_vg': 'ceph-43909478-d18c-58e7-896e-8d0e3e550915'})
2026-01-03 00:57:33.097172 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-43153f84-c643-5017-9328-2bdcf330b780', 'data_vg': 'ceph-43153f84-c643-5017-9328-2bdcf330b780'})
2026-01-03 00:57:33.097176 | orchestrator |
2026-01-03 00:57:33.097180 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-01-03 00:57:33.097184 | orchestrator | Saturday 03 January 2026 00:54:07 +0000 (0:00:44.993) 0:07:46.495 ******
2026-01-03 00:57:33.097188 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.097192 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.097195 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.097200 | orchestrator |
2026-01-03 00:57:33.097207 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-01-03 00:57:33.097213 | orchestrator | Saturday 03 January 2026 00:54:07 +0000 (0:00:00.313) 0:07:46.809 ******
2026-01-03 00:57:33.097225 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.097232 | orchestrator |
2026-01-03 00:57:33.097238 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-01-03 00:57:33.097245 | orchestrator | Saturday 03 January 2026 00:54:08 +0000 (0:00:00.510) 0:07:47.319 ******
2026-01-03 00:57:33.097251 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.097257 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.097263 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.097270 | orchestrator |
2026-01-03 00:57:33.097276 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-01-03 00:57:33.097282 | orchestrator | Saturday 03 January 2026 00:54:09 +0000 (0:00:01.045) 0:07:48.365 ******
2026-01-03 00:57:33.097289 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.097295 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.097301 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.097307 | orchestrator |
2026-01-03 00:57:33.097314 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-01-03 00:57:33.097321 | orchestrator | Saturday 03 January 2026 00:54:11 +0000 (0:00:02.696) 0:07:51.062 ******
2026-01-03 00:57:33.097327 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.097333 | orchestrator |
2026-01-03 00:57:33.097340 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-01-03 00:57:33.097346 | orchestrator | Saturday 03 January 2026 00:54:12 +0000 (0:00:00.532) 0:07:51.594 ******
2026-01-03 00:57:33.097352 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.097359 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.097365 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.097371 | orchestrator |
2026-01-03 00:57:33.097377 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-01-03 00:57:33.097389 | orchestrator | Saturday 03 January 2026 00:54:14 +0000 (0:00:01.736) 0:07:53.331 ******
2026-01-03 00:57:33.097396 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.097402 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.097409 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.097416 | orchestrator |
2026-01-03 00:57:33.097422 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-01-03 00:57:33.097429 | orchestrator | Saturday 03 January 2026 00:54:15 +0000 (0:00:01.247) 0:07:54.578 ******
2026-01-03 00:57:33.097435 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.097441 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.097448 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.097454 | orchestrator |
2026-01-03 00:57:33.097460 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-01-03 00:57:33.097466 | orchestrator | Saturday 03 January 2026 00:54:17 +0000 (0:00:01.845) 0:07:56.424 ******
2026-01-03 00:57:33.097473 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.097479 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.097486 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.097492 | orchestrator |
2026-01-03 00:57:33.097498 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-01-03 00:57:33.097505 | orchestrator | Saturday 03 January 2026 00:54:17 +0000 (0:00:00.333) 0:07:56.758 ******
2026-01-03 00:57:33.097512 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.097518 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.097525 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.097531 | orchestrator |
2026-01-03 00:57:33.097536 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-01-03 00:57:33.097540 | orchestrator | Saturday 03 January 2026 00:54:18 +0000 (0:00:00.657) 0:07:57.416 ******
2026-01-03 00:57:33.097547 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-01-03 00:57:33.097552 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-01-03 00:57:33.097559 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-01-03 00:57:33.097563 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-03 00:57:33.097569 | orchestrator | ok: [testbed-node-4] => (item=2)
2026-01-03 00:57:33.097576 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-01-03 00:57:33.097579 | orchestrator |
2026-01-03 00:57:33.097584 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-01-03 00:57:33.097591 | orchestrator | Saturday 03 January 2026 00:54:19 +0000 (0:00:01.088) 0:07:58.504 ******
2026-01-03 00:57:33.097595 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-01-03 00:57:33.097599 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-01-03 00:57:33.097603 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-01-03 00:57:33.097611 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-03 00:57:33.097617 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-01-03 00:57:33.097622 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-01-03 00:57:33.097626 | orchestrator |
2026-01-03 00:57:33.097631 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-01-03 00:57:33.097638 | orchestrator | Saturday 03 January 2026 00:54:21 +0000 (0:00:02.319) 0:08:00.824 ******
2026-01-03 00:57:33.097642 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-01-03 00:57:33.097646 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-01-03 00:57:33.097649 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-01-03 00:57:33.097653 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-01-03 00:57:33.097657 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-01-03 00:57:33.097662 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-03 00:57:33.097668 | orchestrator |
2026-01-03 00:57:33.097672 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-01-03 00:57:33.097676 | orchestrator | Saturday 03 January 2026 00:54:25 +0000 (0:00:03.790) 0:08:04.614 ******
2026-01-03 00:57:33.097680 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.097684 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.097687 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-03 00:57:33.097691 | orchestrator |
2026-01-03 00:57:33.097695 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-01-03 00:57:33.097699 | orchestrator | Saturday 03 January 2026 00:54:28 +0000 (0:00:02.683) 0:08:07.297 ******
2026-01-03 00:57:33.097702 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.097706 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.097710 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-01-03 00:57:33.097714 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-03 00:57:33.097718 | orchestrator | 2026-01-03 00:57:33.097721 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-03 00:57:33.097725 | orchestrator | Saturday 03 January 2026 00:54:40 +0000 (0:00:12.655) 0:08:19.953 ****** 2026-01-03 00:57:33.097729 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.097733 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.097737 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.097740 | orchestrator | 2026-01-03 00:57:33.097744 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-03 00:57:33.097748 | orchestrator | Saturday 03 January 2026 00:54:42 +0000 (0:00:01.229) 0:08:21.183 ****** 2026-01-03 00:57:33.097752 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.097757 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.097763 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.097770 | orchestrator | 2026-01-03 00:57:33.097776 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-03 00:57:33.097783 | orchestrator | Saturday 03 January 2026 00:54:42 +0000 (0:00:00.403) 0:08:21.586 ****** 2026-01-03 00:57:33.097789 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:57:33.097799 | orchestrator | 2026-01-03 00:57:33.097805 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-03 00:57:33.097811 | orchestrator | Saturday 03 January 2026 00:54:42 +0000 (0:00:00.528) 0:08:22.115 ****** 2026-01-03 00:57:33.097823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:57:33.097830 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-01-03 00:57:33.097836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:57:33.097842 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.097848 | orchestrator | 2026-01-03 00:57:33.097854 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-03 00:57:33.097860 | orchestrator | Saturday 03 January 2026 00:54:44 +0000 (0:00:01.077) 0:08:23.193 ****** 2026-01-03 00:57:33.097867 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.097873 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.097879 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.097885 | orchestrator | 2026-01-03 00:57:33.097892 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-03 00:57:33.097898 | orchestrator | Saturday 03 January 2026 00:54:44 +0000 (0:00:00.352) 0:08:23.546 ****** 2026-01-03 00:57:33.097905 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.097911 | orchestrator | 2026-01-03 00:57:33.097918 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-03 00:57:33.097924 | orchestrator | Saturday 03 January 2026 00:54:44 +0000 (0:00:00.203) 0:08:23.749 ****** 2026-01-03 00:57:33.097930 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.097937 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.097943 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.097949 | orchestrator | 2026-01-03 00:57:33.097956 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-03 00:57:33.097962 | orchestrator | Saturday 03 January 2026 00:54:44 +0000 (0:00:00.330) 0:08:24.079 ****** 2026-01-03 00:57:33.097969 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.097975 | orchestrator | 2026-01-03 00:57:33.097982 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-03 00:57:33.097989 | orchestrator | Saturday 03 January 2026 00:54:45 +0000 (0:00:00.258) 0:08:24.338 ****** 2026-01-03 00:57:33.098008 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.098044 | orchestrator | 2026-01-03 00:57:33.098048 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-03 00:57:33.098052 | orchestrator | Saturday 03 January 2026 00:54:45 +0000 (0:00:00.242) 0:08:24.581 ****** 2026-01-03 00:57:33.098056 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.098059 | orchestrator | 2026-01-03 00:57:33.098063 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-03 00:57:33.098067 | orchestrator | Saturday 03 January 2026 00:54:45 +0000 (0:00:00.138) 0:08:24.719 ****** 2026-01-03 00:57:33.098077 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.098084 | orchestrator | 2026-01-03 00:57:33.098090 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-03 00:57:33.098097 | orchestrator | Saturday 03 January 2026 00:54:45 +0000 (0:00:00.225) 0:08:24.945 ****** 2026-01-03 00:57:33.098103 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.098110 | orchestrator | 2026-01-03 00:57:33.098117 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-03 00:57:33.098123 | orchestrator | Saturday 03 January 2026 00:54:46 +0000 (0:00:00.893) 0:08:25.839 ****** 2026-01-03 00:57:33.098130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:57:33.098136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:57:33.098143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:57:33.098149 | orchestrator | skipping: [testbed-node-3] 2026-01-03 
00:57:33.098156 | orchestrator | 2026-01-03 00:57:33.098168 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-03 00:57:33.098174 | orchestrator | Saturday 03 January 2026 00:54:47 +0000 (0:00:00.408) 0:08:26.247 ****** 2026-01-03 00:57:33.098181 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.098187 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.098194 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.098201 | orchestrator | 2026-01-03 00:57:33.098207 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-03 00:57:33.098213 | orchestrator | Saturday 03 January 2026 00:54:47 +0000 (0:00:00.325) 0:08:26.573 ****** 2026-01-03 00:57:33.098219 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.098225 | orchestrator | 2026-01-03 00:57:33.098231 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-03 00:57:33.098237 | orchestrator | Saturday 03 January 2026 00:54:47 +0000 (0:00:00.220) 0:08:26.793 ****** 2026-01-03 00:57:33.098242 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.098249 | orchestrator | 2026-01-03 00:57:33.098255 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-01-03 00:57:33.098261 | orchestrator | 2026-01-03 00:57:33.098267 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-03 00:57:33.098272 | orchestrator | Saturday 03 January 2026 00:54:48 +0000 (0:00:00.668) 0:08:27.462 ****** 2026-01-03 00:57:33.098279 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.098286 | orchestrator | 2026-01-03 00:57:33.098292 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-01-03 00:57:33.098298 | orchestrator | Saturday 03 January 2026 00:54:49 +0000 (0:00:01.262) 0:08:28.725 ****** 2026-01-03 00:57:33.098305 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.098311 | orchestrator | 2026-01-03 00:57:33.098318 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-03 00:57:33.098324 | orchestrator | Saturday 03 January 2026 00:54:50 +0000 (0:00:01.281) 0:08:30.006 ****** 2026-01-03 00:57:33.098331 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.098337 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.098343 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.098354 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.098361 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.098367 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.098373 | orchestrator | 2026-01-03 00:57:33.098379 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-03 00:57:33.098385 | orchestrator | Saturday 03 January 2026 00:54:52 +0000 (0:00:01.306) 0:08:31.313 ****** 2026-01-03 00:57:33.098391 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.098397 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.098403 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.098421 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.098428 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.098435 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.098441 | orchestrator | 2026-01-03 00:57:33.098448 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-03 00:57:33.098455 | orchestrator | Saturday 03 
January 2026 00:54:53 +0000 (0:00:00.845) 0:08:32.159 ****** 2026-01-03 00:57:33.098461 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.098467 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.098473 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.098479 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.098485 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.098491 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.098497 | orchestrator | 2026-01-03 00:57:33.098503 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-03 00:57:33.098518 | orchestrator | Saturday 03 January 2026 00:54:54 +0000 (0:00:01.003) 0:08:33.162 ****** 2026-01-03 00:57:33.098522 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.098526 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.098529 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.098533 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.098537 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.098541 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.098545 | orchestrator | 2026-01-03 00:57:33.098548 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-03 00:57:33.098552 | orchestrator | Saturday 03 January 2026 00:54:54 +0000 (0:00:00.641) 0:08:33.803 ****** 2026-01-03 00:57:33.098556 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.098560 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.098563 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.098567 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.098571 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.098575 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.098579 | orchestrator | 2026-01-03 00:57:33.098583 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-01-03 00:57:33.098587 | orchestrator | Saturday 03 January 2026 00:54:55 +0000 (0:00:01.280) 0:08:35.084 ****** 2026-01-03 00:57:33.098596 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.098600 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.098605 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.098611 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.098617 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.098624 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.098629 | orchestrator | 2026-01-03 00:57:33.098635 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-03 00:57:33.098643 | orchestrator | Saturday 03 January 2026 00:54:56 +0000 (0:00:00.596) 0:08:35.680 ****** 2026-01-03 00:57:33.098653 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.098659 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.098665 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.098671 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.098677 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.098682 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.098687 | orchestrator | 2026-01-03 00:57:33.098693 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-03 00:57:33.098699 | orchestrator | Saturday 03 January 2026 00:54:57 +0000 (0:00:00.827) 0:08:36.507 ****** 2026-01-03 00:57:33.098705 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.098710 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.098716 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.098722 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.098728 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.098734 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.098739 | orchestrator 
| 2026-01-03 00:57:33.098746 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-03 00:57:33.098755 | orchestrator | Saturday 03 January 2026 00:54:58 +0000 (0:00:01.012) 0:08:37.520 ****** 2026-01-03 00:57:33.098763 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.098770 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.098776 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.098782 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.098788 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.098793 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.098799 | orchestrator | 2026-01-03 00:57:33.098806 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-03 00:57:33.098811 | orchestrator | Saturday 03 January 2026 00:54:59 +0000 (0:00:01.316) 0:08:38.836 ****** 2026-01-03 00:57:33.098817 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.098823 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.098835 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.098841 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.098847 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.098854 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.098859 | orchestrator | 2026-01-03 00:57:33.098867 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-03 00:57:33.098872 | orchestrator | Saturday 03 January 2026 00:55:00 +0000 (0:00:00.616) 0:08:39.453 ****** 2026-01-03 00:57:33.098876 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.098879 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.098883 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.098887 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.098891 | orchestrator | ok: [testbed-node-2] 2026-01-03 
00:57:33.098895 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.098898 | orchestrator | 2026-01-03 00:57:33.098902 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-03 00:57:33.098907 | orchestrator | Saturday 03 January 2026 00:55:01 +0000 (0:00:01.074) 0:08:40.528 ****** 2026-01-03 00:57:33.098913 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.098923 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.098930 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.098940 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.098946 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.098952 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.098957 | orchestrator | 2026-01-03 00:57:33.098962 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-03 00:57:33.098968 | orchestrator | Saturday 03 January 2026 00:55:02 +0000 (0:00:00.630) 0:08:41.158 ****** 2026-01-03 00:57:33.098973 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.098978 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.098984 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.098989 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.099007 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.099013 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.099019 | orchestrator | 2026-01-03 00:57:33.099025 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-03 00:57:33.099031 | orchestrator | Saturday 03 January 2026 00:55:02 +0000 (0:00:00.845) 0:08:42.004 ****** 2026-01-03 00:57:33.099036 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.099042 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.099048 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.099053 | orchestrator | skipping: [testbed-node-0] 
2026-01-03 00:57:33.099059 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.099064 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.099070 | orchestrator | 2026-01-03 00:57:33.099076 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-03 00:57:33.099083 | orchestrator | Saturday 03 January 2026 00:55:03 +0000 (0:00:00.687) 0:08:42.691 ****** 2026-01-03 00:57:33.099088 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.099094 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.099100 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.099105 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.099111 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.099116 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.099122 | orchestrator | 2026-01-03 00:57:33.099127 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-03 00:57:33.099132 | orchestrator | Saturday 03 January 2026 00:55:04 +0000 (0:00:00.852) 0:08:43.544 ****** 2026-01-03 00:57:33.099138 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.099143 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.099148 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.099154 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:57:33.099159 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:57:33.099164 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:57:33.099176 | orchestrator | 2026-01-03 00:57:33.099182 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-03 00:57:33.099195 | orchestrator | Saturday 03 January 2026 00:55:04 +0000 (0:00:00.577) 0:08:44.121 ****** 2026-01-03 00:57:33.099202 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.099208 | orchestrator | skipping: [testbed-node-4] 
2026-01-03 00:57:33.099214 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.099220 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.099226 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.099232 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.099237 | orchestrator | 2026-01-03 00:57:33.099243 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-03 00:57:33.099249 | orchestrator | Saturday 03 January 2026 00:55:05 +0000 (0:00:00.865) 0:08:44.986 ****** 2026-01-03 00:57:33.099254 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.099260 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.099266 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.099271 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.099278 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.099284 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.099289 | orchestrator | 2026-01-03 00:57:33.099295 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-03 00:57:33.099302 | orchestrator | Saturday 03 January 2026 00:55:06 +0000 (0:00:00.599) 0:08:45.586 ****** 2026-01-03 00:57:33.099308 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.099314 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.099320 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.099325 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.099331 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.099338 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.099343 | orchestrator | 2026-01-03 00:57:33.099349 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-01-03 00:57:33.099355 | orchestrator | Saturday 03 January 2026 00:55:07 +0000 (0:00:01.291) 0:08:46.877 ****** 2026-01-03 00:57:33.099361 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-01-03 00:57:33.099366 | orchestrator | 2026-01-03 00:57:33.099372 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-01-03 00:57:33.099378 | orchestrator | Saturday 03 January 2026 00:55:11 +0000 (0:00:03.907) 0:08:50.785 ****** 2026-01-03 00:57:33.099384 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-03 00:57:33.099390 | orchestrator | 2026-01-03 00:57:33.099396 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-01-03 00:57:33.099401 | orchestrator | Saturday 03 January 2026 00:55:13 +0000 (0:00:02.188) 0:08:52.973 ****** 2026-01-03 00:57:33.099407 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:57:33.099413 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.099418 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.099424 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.099429 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.099435 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.099440 | orchestrator | 2026-01-03 00:57:33.099446 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-01-03 00:57:33.099452 | orchestrator | Saturday 03 January 2026 00:55:15 +0000 (0:00:01.878) 0:08:54.851 ****** 2026-01-03 00:57:33.099458 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:57:33.099463 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.099468 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.099474 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.099479 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.099484 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.099491 | orchestrator | 2026-01-03 00:57:33.099497 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-01-03 00:57:33.099509 | orchestrator | Saturday 03 January 2026 00:55:16 +0000 (0:00:01.002) 0:08:55.854 ****** 2026-01-03 00:57:33.099521 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.099529 | orchestrator | 2026-01-03 00:57:33.099534 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-01-03 00:57:33.099540 | orchestrator | Saturday 03 January 2026 00:55:18 +0000 (0:00:01.302) 0:08:57.156 ****** 2026-01-03 00:57:33.099546 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:57:33.099552 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.099557 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.099562 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.099568 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.099573 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.099579 | orchestrator | 2026-01-03 00:57:33.099584 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-01-03 00:57:33.099590 | orchestrator | Saturday 03 January 2026 00:55:19 +0000 (0:00:01.797) 0:08:58.954 ****** 2026-01-03 00:57:33.099595 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.099601 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:57:33.099606 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.099612 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.099618 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.099624 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.099630 | orchestrator | 2026-01-03 00:57:33.099636 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-01-03 00:57:33.099643 | orchestrator | Saturday 03 January 2026 00:55:23 +0000 (0:00:03.671) 
0:09:02.626 ****** 2026-01-03 00:57:33.099649 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:57:33.099655 | orchestrator | 2026-01-03 00:57:33.099661 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-01-03 00:57:33.099666 | orchestrator | Saturday 03 January 2026 00:55:24 +0000 (0:00:01.424) 0:09:04.050 ****** 2026-01-03 00:57:33.099672 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.099678 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.099684 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.099689 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:57:33.099695 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:57:33.099701 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:57:33.099706 | orchestrator | 2026-01-03 00:57:33.099712 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-01-03 00:57:33.099727 | orchestrator | Saturday 03 January 2026 00:55:26 +0000 (0:00:01.115) 0:09:05.166 ****** 2026-01-03 00:57:33.099734 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:57:33.099742 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.099747 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.099753 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:57:33.099759 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:57:33.099766 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:57:33.099772 | orchestrator | 2026-01-03 00:57:33.099778 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-01-03 00:57:33.099784 | orchestrator | Saturday 03 January 2026 00:55:28 +0000 (0:00:02.958) 0:09:08.124 ****** 2026-01-03 00:57:33.099790 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.099797 | orchestrator 
| ok: [testbed-node-4]
2026-01-03 00:57:33.099803 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.099808 | orchestrator | ok: [testbed-node-0]
2026-01-03 00:57:33.099814 | orchestrator | ok: [testbed-node-1]
2026-01-03 00:57:33.099820 | orchestrator | ok: [testbed-node-2]
2026-01-03 00:57:33.099826 | orchestrator |
2026-01-03 00:57:33.099832 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-01-03 00:57:33.099838 | orchestrator |
2026-01-03 00:57:33.099844 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-03 00:57:33.099857 | orchestrator | Saturday 03 January 2026 00:55:30 +0000 (0:00:01.140) 0:09:09.264 ******
2026-01-03 00:57:33.099864 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.099871 | orchestrator |
2026-01-03 00:57:33.099878 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-03 00:57:33.099884 | orchestrator | Saturday 03 January 2026 00:55:30 +0000 (0:00:00.537) 0:09:09.801 ******
2026-01-03 00:57:33.099891 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.099898 | orchestrator |
2026-01-03 00:57:33.099904 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-03 00:57:33.099910 | orchestrator | Saturday 03 January 2026 00:55:31 +0000 (0:00:00.322) 0:09:10.564 ******
2026-01-03 00:57:33.099916 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.099921 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.099927 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.099933 | orchestrator |
2026-01-03 00:57:33.099940 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-03 00:57:33.099946 | orchestrator | Saturday 03 January 2026 00:55:31 +0000 (0:00:00.322) 0:09:10.887 ******
2026-01-03 00:57:33.099952 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.099958 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.099964 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.099970 | orchestrator |
2026-01-03 00:57:33.099975 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-03 00:57:33.099982 | orchestrator | Saturday 03 January 2026 00:55:32 +0000 (0:00:00.653) 0:09:11.541 ******
2026-01-03 00:57:33.099988 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.100085 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.100101 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.100108 | orchestrator |
2026-01-03 00:57:33.100114 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-03 00:57:33.100121 | orchestrator | Saturday 03 January 2026 00:55:33 +0000 (0:00:00.991) 0:09:12.533 ******
2026-01-03 00:57:33.100128 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.100134 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.100146 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.100153 | orchestrator |
2026-01-03 00:57:33.100159 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-03 00:57:33.100165 | orchestrator | Saturday 03 January 2026 00:55:34 +0000 (0:00:00.687) 0:09:13.220 ******
2026-01-03 00:57:33.100171 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.100177 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.100184 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.100190 | orchestrator |
2026-01-03 00:57:33.100196 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-03 00:57:33.100202 | orchestrator | Saturday 03 January 2026 00:55:34 +0000 (0:00:00.310) 0:09:13.531 ******
2026-01-03 00:57:33.100208 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.100215 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.100221 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.100228 | orchestrator |
2026-01-03 00:57:33.100234 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-03 00:57:33.100241 | orchestrator | Saturday 03 January 2026 00:55:34 +0000 (0:00:00.325) 0:09:13.856 ******
2026-01-03 00:57:33.100247 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.100254 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.100261 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.100267 | orchestrator |
2026-01-03 00:57:33.100273 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-03 00:57:33.100279 | orchestrator | Saturday 03 January 2026 00:55:35 +0000 (0:00:00.597) 0:09:14.454 ******
2026-01-03 00:57:33.100285 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.100299 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.100306 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.100313 | orchestrator |
2026-01-03 00:57:33.100320 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-03 00:57:33.100327 | orchestrator | Saturday 03 January 2026 00:55:36 +0000 (0:00:00.696) 0:09:15.150 ******
2026-01-03 00:57:33.100334 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.100340 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.100346 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.100352 | orchestrator |
2026-01-03 00:57:33.100358 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-03 00:57:33.100364 | orchestrator | Saturday 03 January 2026 00:55:36 +0000 (0:00:00.783) 0:09:15.934 ******
2026-01-03 00:57:33.100371 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.100377 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.100384 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.100391 | orchestrator |
2026-01-03 00:57:33.100397 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-03 00:57:33.100413 | orchestrator | Saturday 03 January 2026 00:55:37 +0000 (0:00:00.314) 0:09:16.248 ******
2026-01-03 00:57:33.100421 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.100427 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.100433 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.100439 | orchestrator |
2026-01-03 00:57:33.100445 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-03 00:57:33.100451 | orchestrator | Saturday 03 January 2026 00:55:37 +0000 (0:00:00.616) 0:09:16.865 ******
2026-01-03 00:57:33.100458 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.100464 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.100471 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.100478 | orchestrator |
2026-01-03 00:57:33.100484 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-03 00:57:33.100491 | orchestrator | Saturday 03 January 2026 00:55:38 +0000 (0:00:00.356) 0:09:17.222 ******
2026-01-03 00:57:33.100497 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.100503 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.100510 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.100516 | orchestrator |
2026-01-03 00:57:33.100522 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-03 00:57:33.100529 | orchestrator | Saturday 03 January 2026 00:55:38 +0000 (0:00:00.332) 0:09:17.555 ******
2026-01-03 00:57:33.100536 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.100542 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.100549 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.100556 | orchestrator |
2026-01-03 00:57:33.100563 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-03 00:57:33.100569 | orchestrator | Saturday 03 January 2026 00:55:38 +0000 (0:00:00.347) 0:09:17.902 ******
2026-01-03 00:57:33.100576 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.100582 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.100589 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.100595 | orchestrator |
2026-01-03 00:57:33.100602 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-03 00:57:33.100609 | orchestrator | Saturday 03 January 2026 00:55:39 +0000 (0:00:00.643) 0:09:18.545 ******
2026-01-03 00:57:33.100615 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.100622 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.100628 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.100635 | orchestrator |
2026-01-03 00:57:33.100641 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-03 00:57:33.100647 | orchestrator | Saturday 03 January 2026 00:55:39 +0000 (0:00:00.328) 0:09:18.874 ******
2026-01-03 00:57:33.100654 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.100660 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.100667 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.100681 | orchestrator |
2026-01-03 00:57:33.100687 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-03 00:57:33.100694 | orchestrator | Saturday 03 January 2026 00:55:40 +0000 (0:00:00.322) 0:09:19.196 ******
2026-01-03 00:57:33.100701 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.100707 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.100714 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.100720 | orchestrator |
2026-01-03 00:57:33.100726 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-03 00:57:33.100733 | orchestrator | Saturday 03 January 2026 00:55:40 +0000 (0:00:00.320) 0:09:19.517 ******
2026-01-03 00:57:33.100740 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.100746 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.100753 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.100759 | orchestrator |
2026-01-03 00:57:33.100770 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-01-03 00:57:33.100776 | orchestrator | Saturday 03 January 2026 00:55:41 +0000 (0:00:00.864) 0:09:20.382 ******
2026-01-03 00:57:33.100783 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.100789 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.100795 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-01-03 00:57:33.100801 | orchestrator |
2026-01-03 00:57:33.100807 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-01-03 00:57:33.100813 | orchestrator | Saturday 03 January 2026 00:55:41 +0000 (0:00:00.388) 0:09:20.770 ******
2026-01-03 00:57:33.100819 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-03 00:57:33.100825 | orchestrator |
2026-01-03 00:57:33.100831 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-01-03 00:57:33.100837 | orchestrator | Saturday 03 January 2026 00:55:44 +0000 (0:00:02.385) 0:09:23.156 ******
2026-01-03 00:57:33.100845 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-01-03 00:57:33.100853 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.100859 | orchestrator |
2026-01-03 00:57:33.100865 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-01-03 00:57:33.100872 | orchestrator | Saturday 03 January 2026 00:55:44 +0000 (0:00:00.220) 0:09:23.376 ******
2026-01-03 00:57:33.100879 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-03 00:57:33.100891 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-03 00:57:33.100898 | orchestrator |
2026-01-03 00:57:33.100912 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-01-03 00:57:33.100919 | orchestrator | Saturday 03 January 2026 00:55:52 +0000 (0:00:08.705) 0:09:32.081 ******
2026-01-03 00:57:33.100925 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-03 00:57:33.100932 | orchestrator |
2026-01-03 00:57:33.100938 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-01-03 00:57:33.100944 | orchestrator | Saturday 03 January 2026 00:55:56 +0000 (0:00:03.402) 0:09:35.484 ******
2026-01-03 00:57:33.100951 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.100958 | orchestrator |
2026-01-03 00:57:33.100964 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-01-03 00:57:33.100978 | orchestrator | Saturday 03 January 2026 00:55:56 +0000 (0:00:00.553) 0:09:36.038 ******
2026-01-03 00:57:33.100984 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-03 00:57:33.100990 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-03 00:57:33.101016 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-03 00:57:33.101022 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-01-03 00:57:33.101028 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-01-03 00:57:33.101034 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-01-03 00:57:33.101040 | orchestrator |
2026-01-03 00:57:33.101046 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-01-03 00:57:33.101053 | orchestrator | Saturday 03 January 2026 00:55:57 +0000 (0:00:01.059) 0:09:37.097 ******
2026-01-03 00:57:33.101059 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-03 00:57:33.101066 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-03 00:57:33.101072 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-03 00:57:33.101078 | orchestrator |
2026-01-03 00:57:33.101084 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-01-03 00:57:33.101090 | orchestrator | Saturday 03 January 2026 00:56:00 +0000 (0:00:02.297) 0:09:39.395 ******
2026-01-03 00:57:33.101096 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-03 00:57:33.101103 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-03 00:57:33.101109 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.101115 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-03 00:57:33.101121 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-03 00:57:33.101127 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.101134 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-03 00:57:33.101140 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-03 00:57:33.101146 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.101153 | orchestrator |
2026-01-03 00:57:33.101159 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-01-03 00:57:33.101165 | orchestrator | Saturday 03 January 2026 00:56:01 +0000 (0:00:01.516) 0:09:40.911 ******
2026-01-03 00:57:33.101172 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.101178 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.101184 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.101191 | orchestrator |
2026-01-03 00:57:33.101205 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-01-03 00:57:33.101212 | orchestrator | Saturday 03 January 2026 00:56:04 +0000 (0:00:02.740) 0:09:43.652 ******
2026-01-03 00:57:33.101218 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.101224 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.101231 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.101237 | orchestrator |
2026-01-03 00:57:33.101244 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-01-03 00:57:33.101250 | orchestrator | Saturday 03 January 2026 00:56:04 +0000 (0:00:00.362) 0:09:44.015 ******
2026-01-03 00:57:33.101256 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.101262 | orchestrator |
2026-01-03 00:57:33.101268 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-01-03 00:57:33.101274 | orchestrator | Saturday 03 January 2026 00:56:05 +0000 (0:00:00.849) 0:09:44.864 ******
2026-01-03 00:57:33.101280 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.101287 | orchestrator |
2026-01-03 00:57:33.101293 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-01-03 00:57:33.101300 | orchestrator | Saturday 03 January 2026 00:56:06 +0000 (0:00:00.614) 0:09:45.478 ******
2026-01-03 00:57:33.101313 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.101320 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.101326 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.101333 | orchestrator |
2026-01-03 00:57:33.101339 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-01-03 00:57:33.101345 | orchestrator | Saturday 03 January 2026 00:56:07 +0000 (0:00:01.210) 0:09:46.689 ******
2026-01-03 00:57:33.101351 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.101357 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.101363 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.101370 | orchestrator |
2026-01-03 00:57:33.101376 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-01-03 00:57:33.101382 | orchestrator | Saturday 03 January 2026 00:56:09 +0000 (0:00:01.556) 0:09:48.245 ******
2026-01-03 00:57:33.101388 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.101395 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.101401 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.101408 | orchestrator |
2026-01-03 00:57:33.101414 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-01-03 00:57:33.101428 | orchestrator | Saturday 03 January 2026 00:56:11 +0000 (0:00:02.310) 0:09:50.556 ******
2026-01-03 00:57:33.101435 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.101441 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.101447 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.101453 | orchestrator |
2026-01-03 00:57:33.101459 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-01-03 00:57:33.101466 | orchestrator | Saturday 03 January 2026 00:56:13 +0000 (0:00:02.412) 0:09:52.968 ******
2026-01-03 00:57:33.101472 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.101478 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.101485 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.101491 | orchestrator |
2026-01-03 00:57:33.101498 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-03 00:57:33.101504 | orchestrator | Saturday 03 January 2026 00:56:15 +0000 (0:00:01.907) 0:09:54.876 ******
2026-01-03 00:57:33.101511 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.101517 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.101524 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.101530 | orchestrator |
2026-01-03 00:57:33.101537 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-01-03 00:57:33.101543 | orchestrator | Saturday 03 January 2026 00:56:16 +0000 (0:00:01.036) 0:09:55.912 ******
2026-01-03 00:57:33.101550 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.101557 | orchestrator |
2026-01-03 00:57:33.101563 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-01-03 00:57:33.101569 | orchestrator | Saturday 03 January 2026 00:56:18 +0000 (0:00:01.286) 0:09:57.198 ******
2026-01-03 00:57:33.101575 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.101582 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.101589 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.101596 | orchestrator |
2026-01-03 00:57:33.101602 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-01-03 00:57:33.101609 | orchestrator | Saturday 03 January 2026 00:56:18 +0000 (0:00:00.476) 0:09:57.675 ******
2026-01-03 00:57:33.101615 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.101621 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.101627 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.101633 | orchestrator |
2026-01-03 00:57:33.101640 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-01-03 00:57:33.101646 | orchestrator | Saturday 03 January 2026 00:56:19 +0000 (0:00:01.348) 0:09:59.023 ******
2026-01-03 00:57:33.101653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-03 00:57:33.101668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-03 00:57:33.101674 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-03 00:57:33.101681 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.101687 | orchestrator |
2026-01-03 00:57:33.101694 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-01-03 00:57:33.101700 | orchestrator | Saturday 03 January 2026 00:56:20 +0000 (0:00:00.979) 0:10:00.002 ******
2026-01-03 00:57:33.101706 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.101712 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.101719 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.101725 | orchestrator |
2026-01-03 00:57:33.101732 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-03 00:57:33.101739 | orchestrator |
2026-01-03 00:57:33.101745 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-03 00:57:33.101756 | orchestrator | Saturday 03 January 2026 00:56:21 +0000 (0:00:00.861) 0:10:00.863 ******
2026-01-03 00:57:33.101763 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.101770 | orchestrator |
2026-01-03 00:57:33.101777 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-03 00:57:33.101784 | orchestrator | Saturday 03 January 2026 00:56:22 +0000 (0:00:00.520) 0:10:01.384 ******
2026-01-03 00:57:33.101791 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.101797 | orchestrator |
2026-01-03 00:57:33.101804 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-03 00:57:33.101810 | orchestrator | Saturday 03 January 2026 00:56:23 +0000 (0:00:00.985) 0:10:02.369 ******
2026-01-03 00:57:33.101817 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.101824 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.101830 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.101836 | orchestrator |
2026-01-03 00:57:33.101843 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-03 00:57:33.101849 | orchestrator | Saturday 03 January 2026 00:56:23 +0000 (0:00:00.453) 0:10:02.823 ******
2026-01-03 00:57:33.101855 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.101862 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.101869 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.101876 | orchestrator |
2026-01-03 00:57:33.101882 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-03 00:57:33.101889 | orchestrator | Saturday 03 January 2026 00:56:24 +0000 (0:00:00.760) 0:10:03.583 ******
2026-01-03 00:57:33.101895 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.101901 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.101907 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.101914 | orchestrator |
2026-01-03 00:57:33.101921 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-03 00:57:33.101927 | orchestrator | Saturday 03 January 2026 00:56:25 +0000 (0:00:00.935) 0:10:04.519 ******
2026-01-03 00:57:33.101934 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.101941 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.101947 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.101954 | orchestrator |
2026-01-03 00:57:33.101960 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-03 00:57:33.101967 | orchestrator | Saturday 03 January 2026 00:56:26 +0000 (0:00:00.723) 0:10:05.242 ******
2026-01-03 00:57:33.101973 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.101985 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.101991 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.102052 | orchestrator |
2026-01-03 00:57:33.102063 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-03 00:57:33.102068 | orchestrator | Saturday 03 January 2026 00:56:26 +0000 (0:00:00.388) 0:10:05.631 ******
2026-01-03 00:57:33.102081 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.102088 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.102095 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.102101 | orchestrator |
2026-01-03 00:57:33.102107 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-03 00:57:33.102113 | orchestrator | Saturday 03 January 2026 00:56:26 +0000 (0:00:00.299) 0:10:05.930 ******
2026-01-03 00:57:33.102119 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.102125 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.102131 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.102137 | orchestrator |
2026-01-03 00:57:33.102143 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-03 00:57:33.102150 | orchestrator | Saturday 03 January 2026 00:56:27 +0000 (0:00:00.305) 0:10:06.236 ******
2026-01-03 00:57:33.102157 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.102164 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.102170 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.102176 | orchestrator |
2026-01-03 00:57:33.102183 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-03 00:57:33.102189 | orchestrator | Saturday 03 January 2026 00:56:28 +0000 (0:00:01.116) 0:10:07.352 ******
2026-01-03 00:57:33.102196 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.102202 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.102208 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.102214 | orchestrator |
2026-01-03 00:57:33.102220 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-03 00:57:33.102226 | orchestrator | Saturday 03 January 2026 00:56:28 +0000 (0:00:00.710) 0:10:08.063 ******
2026-01-03 00:57:33.102233 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.102238 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.102244 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.102250 | orchestrator |
2026-01-03 00:57:33.102256 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-03 00:57:33.102263 | orchestrator | Saturday 03 January 2026 00:56:29 +0000 (0:00:00.317) 0:10:08.380 ******
2026-01-03 00:57:33.102270 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.102277 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.102283 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.102289 | orchestrator |
2026-01-03 00:57:33.102295 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-03 00:57:33.102302 | orchestrator | Saturday 03 January 2026 00:56:29 +0000 (0:00:00.295) 0:10:08.676 ******
2026-01-03 00:57:33.102308 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.102314 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.102320 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.102327 | orchestrator |
2026-01-03 00:57:33.102334 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-03 00:57:33.102340 | orchestrator | Saturday 03 January 2026 00:56:30 +0000 (0:00:00.625) 0:10:09.301 ******
2026-01-03 00:57:33.102347 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.102354 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.102360 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.102367 | orchestrator |
2026-01-03 00:57:33.102374 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-03 00:57:33.102380 | orchestrator | Saturday 03 January 2026 00:56:30 +0000 (0:00:00.367) 0:10:09.668 ******
2026-01-03 00:57:33.102393 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.102400 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.102406 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.102413 | orchestrator |
2026-01-03 00:57:33.102419 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-03 00:57:33.102426 | orchestrator | Saturday 03 January 2026 00:56:30 +0000 (0:00:00.327) 0:10:09.996 ******
2026-01-03 00:57:33.102433 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.102446 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.102452 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.102457 | orchestrator |
2026-01-03 00:57:33.102463 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-03 00:57:33.102470 | orchestrator | Saturday 03 January 2026 00:56:31 +0000 (0:00:00.313) 0:10:10.309 ******
2026-01-03 00:57:33.102476 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.102482 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.102488 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.102494 | orchestrator |
2026-01-03 00:57:33.102500 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-03 00:57:33.102507 | orchestrator | Saturday 03 January 2026 00:56:31 +0000 (0:00:00.660) 0:10:10.970 ******
2026-01-03 00:57:33.102513 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.102519 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.102526 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.102532 | orchestrator |
2026-01-03 00:57:33.102539 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-03 00:57:33.102545 | orchestrator | Saturday 03 January 2026 00:56:32 +0000 (0:00:00.354) 0:10:11.324 ******
2026-01-03 00:57:33.102551 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.102558 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.102564 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.102570 | orchestrator |
2026-01-03 00:57:33.102577 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-03 00:57:33.102583 | orchestrator | Saturday 03 January 2026 00:56:32 +0000 (0:00:00.328) 0:10:11.653 ******
2026-01-03 00:57:33.102590 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:57:33.102596 | orchestrator | ok: [testbed-node-4]
2026-01-03 00:57:33.102602 | orchestrator | ok: [testbed-node-5]
2026-01-03 00:57:33.102609 | orchestrator |
2026-01-03 00:57:33.102616 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-01-03 00:57:33.102623 | orchestrator | Saturday 03 January 2026 00:56:33 +0000 (0:00:00.803) 0:10:12.456 ******
2026-01-03 00:57:33.102630 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.102636 | orchestrator |
2026-01-03 00:57:33.102651 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-01-03 00:57:33.102658 | orchestrator | Saturday 03 January 2026 00:56:33 +0000 (0:00:00.540) 0:10:12.997 ******
2026-01-03 00:57:33.102664 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-03 00:57:33.102671 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-03 00:57:33.102678 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-03 00:57:33.102685 | orchestrator |
2026-01-03 00:57:33.102692 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-01-03 00:57:33.102698 | orchestrator | Saturday 03 January 2026 00:56:36 +0000 (0:00:02.621) 0:10:15.618 ******
2026-01-03 00:57:33.102705 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-03 00:57:33.102711 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-03 00:57:33.102718 | orchestrator | changed: [testbed-node-3]
2026-01-03 00:57:33.102724 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-03 00:57:33.102730 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-03 00:57:33.102737 | orchestrator | changed: [testbed-node-4]
2026-01-03 00:57:33.102743 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-03 00:57:33.102750 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-03 00:57:33.102757 | orchestrator | changed: [testbed-node-5]
2026-01-03 00:57:33.102764 | orchestrator |
2026-01-03 00:57:33.102771 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-01-03 00:57:33.102777 | orchestrator | Saturday 03 January 2026 00:56:38 +0000 (0:00:01.568) 0:10:17.186 ******
2026-01-03 00:57:33.102784 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:57:33.102790 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:57:33.102803 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:57:33.102810 | orchestrator |
2026-01-03 00:57:33.102817 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-01-03 00:57:33.102823 | orchestrator | Saturday 03 January 2026 00:56:38 +0000 (0:00:00.347) 0:10:17.534 ******
2026-01-03 00:57:33.102830 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-03 00:57:33.102837 | orchestrator |
2026-01-03 00:57:33.102843 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-01-03 00:57:33.102850 | orchestrator | Saturday 03 January 2026 00:56:38 +0000 (0:00:00.519) 0:10:18.054 ******
2026-01-03 00:57:33.102857 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-03 00:57:33.102865 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-03 00:57:33.102871 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-03 00:57:33.102877 | orchestrator |
2026-01-03 00:57:33.102884 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-01-03 00:57:33.102891 | orchestrator | Saturday 03 January 2026 00:56:39 +0000 (0:00:00.983) 0:10:19.037 ******
2026-01-03 00:57:33.102909 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-03 00:57:33.102916 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-01-03 00:57:33.102923 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-03 00:57:33.102929 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-01-03 00:57:33.102935 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-03 00:57:33.102941 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-01-03 00:57:33.102948 | orchestrator |
2026-01-03 00:57:33.102955 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-01-03 00:57:33.102962 | orchestrator | Saturday 03 January 2026 00:56:43 +0000 (0:00:03.916) 0:10:22.953 ******
2026-01-03 00:57:33.102969 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-03 00:57:33.102976 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-03 00:57:33.102982 | orchestrator |
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:57:33.102989 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-03 00:57:33.103010 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:57:33.103017 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-03 00:57:33.103023 | orchestrator | 2026-01-03 00:57:33.103029 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-03 00:57:33.103035 | orchestrator | Saturday 03 January 2026 00:56:46 +0000 (0:00:02.236) 0:10:25.190 ****** 2026-01-03 00:57:33.103042 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-03 00:57:33.103049 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:57:33.103055 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-03 00:57:33.103062 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.103068 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-03 00:57:33.103074 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.103080 | orchestrator | 2026-01-03 00:57:33.103094 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-03 00:57:33.103108 | orchestrator | Saturday 03 January 2026 00:56:47 +0000 (0:00:01.224) 0:10:26.414 ****** 2026-01-03 00:57:33.103114 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-03 00:57:33.103120 | orchestrator | 2026-01-03 00:57:33.103127 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-03 00:57:33.103134 | orchestrator | Saturday 03 January 2026 00:56:47 +0000 (0:00:00.240) 0:10:26.655 ****** 2026-01-03 00:57:33.103140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-03 00:57:33.103148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:57:33.103154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:57:33.103161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:57:33.103167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:57:33.103174 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.103180 | orchestrator | 2026-01-03 00:57:33.103186 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-03 00:57:33.103193 | orchestrator | Saturday 03 January 2026 00:56:48 +0000 (0:00:01.231) 0:10:27.886 ****** 2026-01-03 00:57:33.103200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:57:33.103206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:57:33.103212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:57:33.103219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:57:33.103226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-03 00:57:33.103232 | orchestrator | skipping: [testbed-node-3] 2026-01-03 
00:57:33.103239 | orchestrator | 2026-01-03 00:57:33.103245 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-03 00:57:33.103251 | orchestrator | Saturday 03 January 2026 00:56:49 +0000 (0:00:00.648) 0:10:28.535 ****** 2026-01-03 00:57:33.103258 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-03 00:57:33.103270 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-03 00:57:33.103277 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-03 00:57:33.103284 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-03 00:57:33.103291 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-03 00:57:33.103297 | orchestrator | 2026-01-03 00:57:33.103304 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-03 00:57:33.103310 | orchestrator | Saturday 03 January 2026 00:57:18 +0000 (0:00:29.016) 0:10:57.552 ****** 2026-01-03 00:57:33.103322 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.103329 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.103336 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.103343 | orchestrator | 2026-01-03 00:57:33.103349 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-03 00:57:33.103356 | orchestrator | 
Saturday 03 January 2026 00:57:18 +0000 (0:00:00.313) 0:10:57.865 ****** 2026-01-03 00:57:33.103362 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.103369 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.103375 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.103381 | orchestrator | 2026-01-03 00:57:33.103387 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-03 00:57:33.103394 | orchestrator | Saturday 03 January 2026 00:57:19 +0000 (0:00:00.343) 0:10:58.209 ****** 2026-01-03 00:57:33.103401 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:57:33.103407 | orchestrator | 2026-01-03 00:57:33.103414 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-03 00:57:33.103421 | orchestrator | Saturday 03 January 2026 00:57:19 +0000 (0:00:00.868) 0:10:59.077 ****** 2026-01-03 00:57:33.103435 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:57:33.103443 | orchestrator | 2026-01-03 00:57:33.103449 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-03 00:57:33.103455 | orchestrator | Saturday 03 January 2026 00:57:20 +0000 (0:00:00.564) 0:10:59.641 ****** 2026-01-03 00:57:33.103462 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:57:33.103468 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.103475 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.103481 | orchestrator | 2026-01-03 00:57:33.103488 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-03 00:57:33.103495 | orchestrator | Saturday 03 January 2026 00:57:21 +0000 (0:00:01.209) 0:11:00.851 ****** 2026-01-03 00:57:33.103501 | orchestrator | changed: 
[testbed-node-3] 2026-01-03 00:57:33.103507 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.103514 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.103520 | orchestrator | 2026-01-03 00:57:33.103526 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-03 00:57:33.103533 | orchestrator | Saturday 03 January 2026 00:57:23 +0000 (0:00:01.405) 0:11:02.256 ****** 2026-01-03 00:57:33.103539 | orchestrator | changed: [testbed-node-4] 2026-01-03 00:57:33.103546 | orchestrator | changed: [testbed-node-3] 2026-01-03 00:57:33.103553 | orchestrator | changed: [testbed-node-5] 2026-01-03 00:57:33.103560 | orchestrator | 2026-01-03 00:57:33.103566 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-03 00:57:33.103573 | orchestrator | Saturday 03 January 2026 00:57:24 +0000 (0:00:01.749) 0:11:04.006 ****** 2026-01-03 00:57:33.103579 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-03 00:57:33.103586 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-03 00:57:33.103592 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-03 00:57:33.103599 | orchestrator | 2026-01-03 00:57:33.103605 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-03 00:57:33.103612 | orchestrator | Saturday 03 January 2026 00:57:27 +0000 (0:00:02.579) 0:11:06.585 ****** 2026-01-03 00:57:33.103618 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.103625 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.103630 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.103636 | orchestrator 
| 2026-01-03 00:57:33.103643 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-03 00:57:33.103656 | orchestrator | Saturday 03 January 2026 00:57:27 +0000 (0:00:00.367) 0:11:06.953 ****** 2026-01-03 00:57:33.103663 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:57:33.103670 | orchestrator | 2026-01-03 00:57:33.103676 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-03 00:57:33.103681 | orchestrator | Saturday 03 January 2026 00:57:28 +0000 (0:00:00.552) 0:11:07.505 ****** 2026-01-03 00:57:33.103687 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.103693 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.103698 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.103705 | orchestrator | 2026-01-03 00:57:33.103711 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-03 00:57:33.103721 | orchestrator | Saturday 03 January 2026 00:57:29 +0000 (0:00:00.682) 0:11:08.188 ****** 2026-01-03 00:57:33.103728 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:57:33.103734 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:57:33.103740 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:57:33.103746 | orchestrator | 2026-01-03 00:57:33.103753 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-03 00:57:33.103759 | orchestrator | Saturday 03 January 2026 00:57:29 +0000 (0:00:00.366) 0:11:08.555 ****** 2026-01-03 00:57:33.103766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:57:33.103772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:57:33.103779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:57:33.103785 | orchestrator 
| skipping: [testbed-node-3] 2026-01-03 00:57:33.103792 | orchestrator | 2026-01-03 00:57:33.103798 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-03 00:57:33.103804 | orchestrator | Saturday 03 January 2026 00:57:30 +0000 (0:00:00.659) 0:11:09.214 ****** 2026-01-03 00:57:33.103810 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:57:33.103816 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:57:33.103822 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:57:33.103829 | orchestrator | 2026-01-03 00:57:33.103835 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:57:33.103842 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-03 00:57:33.103850 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-03 00:57:33.103857 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-03 00:57:33.103864 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-03 00:57:33.103870 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-03 00:57:33.103884 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-03 00:57:33.103892 | orchestrator | 2026-01-03 00:57:33.103899 | orchestrator | 2026-01-03 00:57:33.103905 | orchestrator | 2026-01-03 00:57:33.103912 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:57:33.103919 | orchestrator | Saturday 03 January 2026 00:57:30 +0000 (0:00:00.263) 0:11:09.478 ****** 2026-01-03 00:57:33.103925 | orchestrator | =============================================================================== 
2026-01-03 00:57:33.103931 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 64.98s 2026-01-03 00:57:33.103938 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.99s 2026-01-03 00:57:33.103952 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 35.84s 2026-01-03 00:57:33.103958 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.02s 2026-01-03 00:57:33.103964 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.11s 2026-01-03 00:57:33.103970 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.66s 2026-01-03 00:57:33.103977 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.61s 2026-01-03 00:57:33.103983 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.28s 2026-01-03 00:57:33.103990 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.71s 2026-01-03 00:57:33.104032 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.47s 2026-01-03 00:57:33.104040 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.29s 2026-01-03 00:57:33.104046 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.56s 2026-01-03 00:57:33.104053 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.67s 2026-01-03 00:57:33.104059 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.92s 2026-01-03 00:57:33.104065 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.91s 2026-01-03 00:57:33.104071 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.84s 2026-01-03 
00:57:33.104077 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.79s 2026-01-03 00:57:33.104083 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.67s 2026-01-03 00:57:33.104089 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.40s 2026-01-03 00:57:33.104095 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.07s 2026-01-03 00:57:36.134540 | orchestrator | 2026-01-03 00:57:36 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED 2026-01-03 00:57:36.135571 | orchestrator | 2026-01-03 00:57:36 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:57:36.137333 | orchestrator | 2026-01-03 00:57:36 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:57:36.137378 | orchestrator | 2026-01-03 00:57:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:39.185180 | orchestrator | 2026-01-03 00:57:39 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED 2026-01-03 00:57:39.187039 | orchestrator | 2026-01-03 00:57:39 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:57:39.189086 | orchestrator | 2026-01-03 00:57:39 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:57:39.189239 | orchestrator | 2026-01-03 00:57:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:42.272260 | orchestrator | 2026-01-03 00:57:42 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED 2026-01-03 00:57:42.273882 | orchestrator | 2026-01-03 00:57:42 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:57:42.275695 | orchestrator | 2026-01-03 00:57:42 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:57:42.275867 | orchestrator | 2026-01-03 00:57:42 | 
INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:45.317132 | orchestrator | 2026-01-03 00:57:45 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED 2026-01-03 00:57:45.320476 | orchestrator | 2026-01-03 00:57:45 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:57:45.321737 | orchestrator | 2026-01-03 00:57:45 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:57:45.321803 | orchestrator | 2026-01-03 00:57:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:48.374436 | orchestrator | 2026-01-03 00:57:48 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED 2026-01-03 00:57:48.375155 | orchestrator | 2026-01-03 00:57:48 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:57:48.376035 | orchestrator | 2026-01-03 00:57:48 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:57:48.376103 | orchestrator | 2026-01-03 00:57:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:51.429568 | orchestrator | 2026-01-03 00:57:51 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED 2026-01-03 00:57:51.431251 | orchestrator | 2026-01-03 00:57:51 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:57:51.433973 | orchestrator | 2026-01-03 00:57:51 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:57:51.434197 | orchestrator | 2026-01-03 00:57:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:54.478563 | orchestrator | 2026-01-03 00:57:54 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED 2026-01-03 00:57:54.479538 | orchestrator | 2026-01-03 00:57:54 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:57:54.482476 | orchestrator | 2026-01-03 00:57:54 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in 
state STARTED 2026-01-03 00:57:54.483190 | orchestrator | 2026-01-03 00:57:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:57:57.530674 | orchestrator | 2026-01-03 00:57:57 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED 2026-01-03 00:57:57.530733 | orchestrator | 2026-01-03 00:57:57 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:57:57.531106 | orchestrator | 2026-01-03 00:57:57 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:57:57.531130 | orchestrator | 2026-01-03 00:57:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:00.572771 | orchestrator | 2026-01-03 00:58:00 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED 2026-01-03 00:58:00.574724 | orchestrator | 2026-01-03 00:58:00 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:00.576848 | orchestrator | 2026-01-03 00:58:00 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:00.576910 | orchestrator | 2026-01-03 00:58:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:03.627775 | orchestrator | 2026-01-03 00:58:03 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED 2026-01-03 00:58:03.628795 | orchestrator | 2026-01-03 00:58:03 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:03.629981 | orchestrator | 2026-01-03 00:58:03 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:03.630092 | orchestrator | 2026-01-03 00:58:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:06.687537 | orchestrator | 2026-01-03 00:58:06 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED 2026-01-03 00:58:06.689623 | orchestrator | 2026-01-03 00:58:06 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:06.691404 | orchestrator 
| 2026-01-03 00:58:06 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:06.692144 | orchestrator | 2026-01-03 00:58:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:09.737572 | orchestrator | 2026-01-03 00:58:09 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state STARTED 2026-01-03 00:58:09.738959 | orchestrator | 2026-01-03 00:58:09 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:09.740502 | orchestrator | 2026-01-03 00:58:09 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:09.740548 | orchestrator | 2026-01-03 00:58:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:12.795858 | orchestrator | 2026-01-03 00:58:12.795932 | orchestrator | 2026-01-03 00:58:12 | INFO  | Task adbdb875-92b0-4911-84d3-0e856fc43ea1 is in state SUCCESS 2026-01-03 00:58:12.797560 | orchestrator | 2026-01-03 00:58:12.797591 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:58:12.797597 | orchestrator | 2026-01-03 00:58:12.797603 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 00:58:12.797608 | orchestrator | Saturday 03 January 2026 00:55:44 +0000 (0:00:00.253) 0:00:00.253 ****** 2026-01-03 00:58:12.797613 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:12.797618 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:58:12.797623 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:58:12.797628 | orchestrator | 2026-01-03 00:58:12.797632 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:58:12.797637 | orchestrator | Saturday 03 January 2026 00:55:45 +0000 (0:00:00.304) 0:00:00.558 ****** 2026-01-03 00:58:12.797643 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-03 00:58:12.797647 | orchestrator | ok: 
[testbed-node-1] => (item=enable_opensearch_True) 2026-01-03 00:58:12.797652 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-03 00:58:12.797656 | orchestrator | 2026-01-03 00:58:12.797660 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-03 00:58:12.797665 | orchestrator | 2026-01-03 00:58:12.797669 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-03 00:58:12.797674 | orchestrator | Saturday 03 January 2026 00:55:45 +0000 (0:00:00.433) 0:00:00.992 ****** 2026-01-03 00:58:12.797678 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:58:12.797683 | orchestrator | 2026-01-03 00:58:12.797687 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-03 00:58:12.797691 | orchestrator | Saturday 03 January 2026 00:55:46 +0000 (0:00:00.505) 0:00:01.497 ****** 2026-01-03 00:58:12.797696 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-03 00:58:12.797700 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-03 00:58:12.797705 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-03 00:58:12.797709 | orchestrator | 2026-01-03 00:58:12.797713 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-03 00:58:12.797718 | orchestrator | Saturday 03 January 2026 00:55:46 +0000 (0:00:00.758) 0:00:02.256 ****** 2026-01-03 00:58:12.797725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:58:12.797760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:58:12.797774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:58:12.797781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 00:58:12.797786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 00:58:12.797795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 00:58:12.797804 | orchestrator | 2026-01-03 00:58:12.797809 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-03 00:58:12.797813 | orchestrator | Saturday 03 January 2026 00:55:48 +0000 (0:00:01.706) 0:00:03.963 ****** 2026-01-03 00:58:12.797818 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-01-03 00:58:12.797822 | orchestrator | 2026-01-03 00:58:12.797826 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-03 00:58:12.797831 | orchestrator | Saturday 03 January 2026 00:55:49 +0000 (0:00:00.528) 0:00:04.492 ****** 2026-01-03 00:58:12.797841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:58:12.797846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2026-01-03 00:58:12.797851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:58:12.797863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 
00:58:12.797871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 00:58:12.797942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 00:58:12.797949 | orchestrator | 2026-01-03 00:58:12.797954 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-03 00:58:12.797959 | orchestrator | Saturday 03 January 2026 00:55:51 +0000 (0:00:02.676) 0:00:07.168 ****** 2026-01-03 00:58:12.797963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-03 00:58:12.797980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-03 00:58:12.797986 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:12.797990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-03 00:58:12.797999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-03 00:58:12.798004 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:12.798009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-03 00:58:12.798053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-03 00:58:12.798061 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:12.798066 | orchestrator | 2026-01-03 00:58:12.798070 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-03 00:58:12.798075 | orchestrator | Saturday 03 January 2026 00:55:53 +0000 (0:00:01.193) 0:00:08.362 ****** 2026-01-03 00:58:12.798079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-03 00:58:12.798093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-03 00:58:12.798101 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:12.798108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-03 00:58:12.798120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-03 00:58:12.798128 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:12.798210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-03 00:58:12.798228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-03 00:58:12.798236 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:12.798243 | orchestrator | 2026-01-03 00:58:12.798250 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-03 00:58:12.798258 | orchestrator | Saturday 03 January 2026 00:55:53 +0000 (0:00:00.909) 0:00:09.271 ****** 2026-01-03 00:58:12.798266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:58:12.798280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:58:12.798289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:58:12.798298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 00:58:12.798304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-01-03 00:58:12.798313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 00:58:12.798317 | orchestrator | 2026-01-03 00:58:12.798322 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-03 00:58:12.798326 | orchestrator | Saturday 03 January 2026 00:55:56 +0000 (0:00:02.190) 0:00:11.462 ****** 2026-01-03 00:58:12.798331 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:58:12.798335 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:12.798340 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:58:12.798344 | orchestrator | 2026-01-03 00:58:12.798349 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-03 00:58:12.798353 | orchestrator | Saturday 03 January 2026 00:55:59 +0000 (0:00:02.897) 0:00:14.359 ****** 2026-01-03 00:58:12.798357 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:12.798365 | orchestrator 
| changed: [testbed-node-1] 2026-01-03 00:58:12.798369 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:58:12.798373 | orchestrator | 2026-01-03 00:58:12.798449 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-03 00:58:12.798454 | orchestrator | Saturday 03 January 2026 00:56:01 +0000 (0:00:02.291) 0:00:16.653 ****** 2026-01-03 00:58:12.798459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:58:12.798468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:58:12.798477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-03 00:58:12.798482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 00:58:12.798491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 00:58:12.798500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-03 00:58:12.798509 | orchestrator | 2026-01-03 00:58:12.798513 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-03 00:58:12.798518 | orchestrator | Saturday 03 January 2026 00:56:03 +0000 (0:00:02.215) 0:00:18.869 ****** 2026-01-03 00:58:12.798522 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:12.798527 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:12.798531 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:12.798536 | orchestrator | 2026-01-03 00:58:12.798540 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-03 00:58:12.798545 | orchestrator | Saturday 03 January 2026 00:56:03 +0000 (0:00:00.295) 0:00:19.164 ****** 2026-01-03 00:58:12.798549 | orchestrator | 2026-01-03 00:58:12.798554 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-03 00:58:12.798558 | orchestrator | Saturday 03 January 2026 00:56:03 +0000 (0:00:00.071) 0:00:19.235 ****** 2026-01-03 00:58:12.798562 | orchestrator | 2026-01-03 00:58:12.798567 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-03 00:58:12.798571 | orchestrator | Saturday 03 January 2026 00:56:04 +0000 (0:00:00.084) 0:00:19.320 ****** 2026-01-03 00:58:12.798576 | orchestrator | 2026-01-03 00:58:12.798580 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-03 00:58:12.798584 | orchestrator | Saturday 03 January 2026 00:56:04 +0000 (0:00:00.070) 0:00:19.390 ****** 2026-01-03 00:58:12.798589 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:12.798593 | 
orchestrator | 2026-01-03 00:58:12.798598 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-03 00:58:12.798602 | orchestrator | Saturday 03 January 2026 00:56:04 +0000 (0:00:00.209) 0:00:19.599 ****** 2026-01-03 00:58:12.798607 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:12.798611 | orchestrator | 2026-01-03 00:58:12.798616 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-03 00:58:12.798620 | orchestrator | Saturday 03 January 2026 00:56:05 +0000 (0:00:00.962) 0:00:20.562 ****** 2026-01-03 00:58:12.798625 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:12.798629 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:58:12.798633 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:58:12.798638 | orchestrator | 2026-01-03 00:58:12.798642 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-03 00:58:12.798647 | orchestrator | Saturday 03 January 2026 00:56:53 +0000 (0:00:48.715) 0:01:09.277 ****** 2026-01-03 00:58:12.798651 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:12.798655 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:58:12.798660 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:58:12.798664 | orchestrator | 2026-01-03 00:58:12.798669 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-03 00:58:12.798673 | orchestrator | Saturday 03 January 2026 00:57:59 +0000 (0:01:05.696) 0:02:14.974 ****** 2026-01-03 00:58:12.798678 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:58:12.798682 | orchestrator | 2026-01-03 00:58:12.798687 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-03 00:58:12.798691 | orchestrator | Saturday 03 January 2026 
00:58:00 +0000 (0:00:00.799) 0:02:15.773 ****** 2026-01-03 00:58:12.798696 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:12.798700 | orchestrator | 2026-01-03 00:58:12.798705 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-03 00:58:12.798712 | orchestrator | Saturday 03 January 2026 00:58:03 +0000 (0:00:02.631) 0:02:18.404 ****** 2026-01-03 00:58:12.798716 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:12.798724 | orchestrator | 2026-01-03 00:58:12.798728 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-03 00:58:12.798733 | orchestrator | Saturday 03 January 2026 00:58:05 +0000 (0:00:02.636) 0:02:21.041 ****** 2026-01-03 00:58:12.798737 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:12.798742 | orchestrator | 2026-01-03 00:58:12.798746 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-03 00:58:12.798750 | orchestrator | Saturday 03 January 2026 00:58:08 +0000 (0:00:03.220) 0:02:24.262 ****** 2026-01-03 00:58:12.798755 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:12.798759 | orchestrator | 2026-01-03 00:58:12.798764 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:58:12.798769 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-03 00:58:12.798776 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-03 00:58:12.798780 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-03 00:58:12.798785 | orchestrator | 2026-01-03 00:58:12.798789 | orchestrator | 2026-01-03 00:58:12.798794 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:58:12.798802 | orchestrator 
| Saturday 03 January 2026 00:58:12 +0000 (0:00:03.046) 0:02:27.309 ****** 2026-01-03 00:58:12.798806 | orchestrator | =============================================================================== 2026-01-03 00:58:12.798811 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 65.70s 2026-01-03 00:58:12.798815 | orchestrator | opensearch : Restart opensearch container ------------------------------ 48.72s 2026-01-03 00:58:12.798820 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.22s 2026-01-03 00:58:12.798824 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.05s 2026-01-03 00:58:12.798828 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.90s 2026-01-03 00:58:12.798833 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.68s 2026-01-03 00:58:12.798837 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.64s 2026-01-03 00:58:12.798842 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.63s 2026-01-03 00:58:12.798846 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.29s 2026-01-03 00:58:12.798850 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.22s 2026-01-03 00:58:12.798855 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.19s 2026-01-03 00:58:12.798859 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.71s 2026-01-03 00:58:12.798864 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.19s 2026-01-03 00:58:12.798868 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.96s 2026-01-03 00:58:12.798873 | orchestrator | 
service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.91s 2026-01-03 00:58:12.798877 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.80s 2026-01-03 00:58:12.798881 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.76s 2026-01-03 00:58:12.798886 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-01-03 00:58:12.798890 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-01-03 00:58:12.798895 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-01-03 00:58:12.798899 | orchestrator | 2026-01-03 00:58:12 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:12.800512 | orchestrator | 2026-01-03 00:58:12 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:12.800541 | orchestrator | 2026-01-03 00:58:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:15.852225 | orchestrator | 2026-01-03 00:58:15 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:15.854867 | orchestrator | 2026-01-03 00:58:15 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:15.854958 | orchestrator | 2026-01-03 00:58:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:18.907496 | orchestrator | 2026-01-03 00:58:18 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:18.912705 | orchestrator | 2026-01-03 00:58:18 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:18.912783 | orchestrator | 2026-01-03 00:58:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:21.958876 | orchestrator | 2026-01-03 00:58:21 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 
00:58:21.960575 | orchestrator | 2026-01-03 00:58:21 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:21.960688 | orchestrator | 2026-01-03 00:58:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:25.015002 | orchestrator | 2026-01-03 00:58:25 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:25.016494 | orchestrator | 2026-01-03 00:58:25 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:25.018400 | orchestrator | 2026-01-03 00:58:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:28.066367 | orchestrator | 2026-01-03 00:58:28 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:28.069592 | orchestrator | 2026-01-03 00:58:28 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:28.069675 | orchestrator | 2026-01-03 00:58:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:31.117044 | orchestrator | 2026-01-03 00:58:31 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:31.118850 | orchestrator | 2026-01-03 00:58:31 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:31.119082 | orchestrator | 2026-01-03 00:58:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:34.162153 | orchestrator | 2026-01-03 00:58:34 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:34.164277 | orchestrator | 2026-01-03 00:58:34 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:34.164540 | orchestrator | 2026-01-03 00:58:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:37.220027 | orchestrator | 2026-01-03 00:58:37 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:37.221663 | orchestrator | 2026-01-03 00:58:37 | INFO  | Task 
79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:37.221710 | orchestrator | 2026-01-03 00:58:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:40.266800 | orchestrator | 2026-01-03 00:58:40 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:40.269410 | orchestrator | 2026-01-03 00:58:40 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:40.269498 | orchestrator | 2026-01-03 00:58:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:43.311684 | orchestrator | 2026-01-03 00:58:43 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:43.313863 | orchestrator | 2026-01-03 00:58:43 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:43.314048 | orchestrator | 2026-01-03 00:58:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:46.366804 | orchestrator | 2026-01-03 00:58:46 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:46.367496 | orchestrator | 2026-01-03 00:58:46 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:46.367530 | orchestrator | 2026-01-03 00:58:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:49.424302 | orchestrator | 2026-01-03 00:58:49 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state STARTED 2026-01-03 00:58:49.426232 | orchestrator | 2026-01-03 00:58:49 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:49.426322 | orchestrator | 2026-01-03 00:58:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:52.485920 | orchestrator | 2026-01-03 00:58:52 | INFO  | Task b1ff1038-3509-4436-84d4-a80b0ff46d74 is in state STARTED 2026-01-03 00:58:52.490577 | orchestrator | 2026-01-03 00:58:52 | INFO  | Task a453336a-8472-46e9-bd1c-5eb713589fd8 is in state SUCCESS 2026-01-03 
00:58:52.492820 | orchestrator | 2026-01-03 00:58:52.492881 | orchestrator | 2026-01-03 00:58:52.492891 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-03 00:58:52.492901 | orchestrator | 2026-01-03 00:58:52.492908 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-03 00:58:52.492917 | orchestrator | Saturday 03 January 2026 00:55:44 +0000 (0:00:00.088) 0:00:00.088 ****** 2026-01-03 00:58:52.492925 | orchestrator | ok: [localhost] => { 2026-01-03 00:58:52.492935 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-01-03 00:58:52.492943 | orchestrator | } 2026-01-03 00:58:52.492951 | orchestrator | 2026-01-03 00:58:52.492960 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-03 00:58:52.492968 | orchestrator | Saturday 03 January 2026 00:55:44 +0000 (0:00:00.054) 0:00:00.142 ****** 2026-01-03 00:58:52.492993 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-03 00:58:52.493001 | orchestrator | ...ignoring 2026-01-03 00:58:52.493009 | orchestrator | 2026-01-03 00:58:52.493016 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-03 00:58:52.493023 | orchestrator | Saturday 03 January 2026 00:55:47 +0000 (0:00:02.848) 0:00:02.991 ****** 2026-01-03 00:58:52.493030 | orchestrator | skipping: [localhost] 2026-01-03 00:58:52.493037 | orchestrator | 2026-01-03 00:58:52.493043 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-03 00:58:52.493050 | orchestrator | Saturday 03 January 2026 00:55:47 +0000 (0:00:00.055) 0:00:03.047 ****** 2026-01-03 00:58:52.493056 | orchestrator | ok: [localhost] 2026-01-03 00:58:52.493063 | orchestrator | 2026-01-03 00:58:52.493070 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 00:58:52.493078 | orchestrator | 2026-01-03 00:58:52.493085 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 00:58:52.493092 | orchestrator | Saturday 03 January 2026 00:55:47 +0000 (0:00:00.150) 0:00:03.198 ****** 2026-01-03 00:58:52.493106 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:52.493114 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:58:52.493121 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:58:52.493128 | orchestrator | 2026-01-03 00:58:52.493135 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:58:52.493143 | orchestrator | Saturday 03 January 2026 00:55:47 +0000 (0:00:00.302) 0:00:03.500 ****** 2026-01-03 00:58:52.493169 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-03 00:58:52.493176 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-01-03 00:58:52.493182 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-03 00:58:52.493189 | orchestrator | 2026-01-03 00:58:52.493195 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-03 00:58:52.493201 | orchestrator | 2026-01-03 00:58:52.493208 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-03 00:58:52.493214 | orchestrator | Saturday 03 January 2026 00:55:48 +0000 (0:00:00.571) 0:00:04.071 ****** 2026-01-03 00:58:52.493232 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-03 00:58:52.493239 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-03 00:58:52.493246 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-03 00:58:52.493254 | orchestrator | 2026-01-03 00:58:52.493322 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-03 00:58:52.493335 | orchestrator | Saturday 03 January 2026 00:55:48 +0000 (0:00:00.374) 0:00:04.446 ****** 2026-01-03 00:58:52.493343 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:58:52.493352 | orchestrator | 2026-01-03 00:58:52.493360 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-03 00:58:52.493368 | orchestrator | Saturday 03 January 2026 00:55:49 +0000 (0:00:00.534) 0:00:04.981 ****** 2026-01-03 00:58:52.493397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:58:52.493418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:58:52.493437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:58:52.493446 | orchestrator | 2026-01-03 00:58:52.493462 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-03 00:58:52.493470 | orchestrator | Saturday 03 January 2026 00:55:52 +0000 (0:00:02.794) 0:00:07.775 ****** 2026-01-03 00:58:52.493483 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.493492 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:52.493499 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.493507 | orchestrator | 2026-01-03 00:58:52.493514 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-03 00:58:52.493521 | orchestrator | Saturday 03 January 2026 00:55:52 +0000 (0:00:00.719) 0:00:08.495 ****** 2026-01-03 00:58:52.493529 | orchestrator | skipping: [testbed-node-1] 2026-01-03 
00:58:52.493537 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.493544 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:52.493552 | orchestrator | 2026-01-03 00:58:52.493560 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-03 00:58:52.493574 | orchestrator | Saturday 03 January 2026 00:55:54 +0000 (0:00:01.545) 0:00:10.040 ****** 2026-01-03 00:58:52.493588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:58:52.493604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:58:52.493617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 
00:58:52.493630 | orchestrator | 2026-01-03 00:58:52.493638 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-03 00:58:52.493646 | orchestrator | Saturday 03 January 2026 00:55:58 +0000 (0:00:03.849) 0:00:13.889 ****** 2026-01-03 00:58:52.493654 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.493662 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.493669 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:52.493677 | orchestrator | 2026-01-03 00:58:52.493684 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-03 00:58:52.493691 | orchestrator | Saturday 03 January 2026 00:55:59 +0000 (0:00:01.225) 0:00:15.115 ****** 2026-01-03 00:58:52.493699 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:52.493706 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:58:52.493713 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:58:52.493721 | orchestrator | 2026-01-03 00:58:52.493728 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-03 00:58:52.493736 | orchestrator | Saturday 03 January 2026 00:56:03 +0000 (0:00:04.393) 0:00:19.508 ****** 2026-01-03 00:58:52.493743 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:58:52.493751 | orchestrator | 2026-01-03 00:58:52.493758 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-03 00:58:52.493766 | orchestrator | Saturday 03 January 2026 00:56:04 +0000 (0:00:00.517) 0:00:20.026 ****** 2026-01-03 00:58:52.493785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:58:52.493800 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:52.493808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:58:52.493816 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.493831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:58:52.493844 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.493852 | orchestrator | 2026-01-03 00:58:52.493860 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-03 00:58:52.493868 | orchestrator | Saturday 03 January 2026 00:56:08 +0000 (0:00:04.383) 0:00:24.409 ****** 2026-01-03 00:58:52.493880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:58:52.493891 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.493905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:58:52.493919 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:52.493930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:58:52.493937 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.493944 | orchestrator | 2026-01-03 00:58:52.493951 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-03 00:58:52.493958 | orchestrator | Saturday 03 January 2026 00:56:12 +0000 (0:00:03.320) 0:00:27.730 ****** 2026-01-03 00:58:52.493966 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:58:52.493984 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:52.494002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:58:52.494010 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.494065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-03 00:58:52.494080 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.494088 | orchestrator | 2026-01-03 00:58:52.494096 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-01-03 00:58:52.494103 | orchestrator | Saturday 03 January 2026 00:56:15 +0000 
(0:00:03.258) 0:00:30.988 ****** 2026-01-03 00:58:52.494124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:58:52.494135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:58:52.494161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-03 00:58:52.494171 | orchestrator | 2026-01-03 00:58:52.494179 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-03 00:58:52.494187 | orchestrator | Saturday 03 January 2026 00:56:19 +0000 (0:00:03.682) 0:00:34.671 ****** 2026-01-03 00:58:52.494195 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:52.494202 | orchestrator | 
changed: [testbed-node-1] 2026-01-03 00:58:52.494209 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:58:52.494216 | orchestrator | 2026-01-03 00:58:52.494224 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-03 00:58:52.494231 | orchestrator | Saturday 03 January 2026 00:56:20 +0000 (0:00:00.936) 0:00:35.608 ****** 2026-01-03 00:58:52.494239 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:52.494247 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:58:52.494254 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:58:52.494281 | orchestrator | 2026-01-03 00:58:52.494289 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-03 00:58:52.494298 | orchestrator | Saturday 03 January 2026 00:56:20 +0000 (0:00:00.563) 0:00:36.171 ****** 2026-01-03 00:58:52.494305 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:52.494313 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:58:52.494320 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:58:52.494328 | orchestrator | 2026-01-03 00:58:52.494336 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-03 00:58:52.494343 | orchestrator | Saturday 03 January 2026 00:56:21 +0000 (0:00:00.468) 0:00:36.640 ****** 2026-01-03 00:58:52.494351 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-03 00:58:52.494359 | orchestrator | ...ignoring 2026-01-03 00:58:52.494364 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-03 00:58:52.494369 | orchestrator | ...ignoring 2026-01-03 00:58:52.494374 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-03 00:58:52.494384 | orchestrator | ...ignoring 2026-01-03 00:58:52.494389 | orchestrator | 2026-01-03 00:58:52.494394 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-03 00:58:52.494398 | orchestrator | Saturday 03 January 2026 00:56:32 +0000 (0:00:10.963) 0:00:47.604 ****** 2026-01-03 00:58:52.494403 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:52.494408 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:58:52.494412 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:58:52.494417 | orchestrator | 2026-01-03 00:58:52.494422 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-03 00:58:52.494427 | orchestrator | Saturday 03 January 2026 00:56:32 +0000 (0:00:00.445) 0:00:48.050 ****** 2026-01-03 00:58:52.494431 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:52.494436 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.494441 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.494445 | orchestrator | 2026-01-03 00:58:52.494450 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-03 00:58:52.494457 | orchestrator | Saturday 03 January 2026 00:56:33 +0000 (0:00:00.784) 0:00:48.835 ****** 2026-01-03 00:58:52.494464 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:52.494471 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.494478 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.494486 | orchestrator | 2026-01-03 00:58:52.494493 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-03 00:58:52.494501 | orchestrator | Saturday 03 January 2026 00:56:33 +0000 (0:00:00.446) 0:00:49.281 ****** 2026-01-03 00:58:52.494508 | orchestrator | skipping: 
[testbed-node-0] 2026-01-03 00:58:52.494515 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.494524 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.494534 | orchestrator | 2026-01-03 00:58:52.494542 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-03 00:58:52.494549 | orchestrator | Saturday 03 January 2026 00:56:34 +0000 (0:00:00.456) 0:00:49.738 ****** 2026-01-03 00:58:52.494557 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:52.494564 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:58:52.494571 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:58:52.494579 | orchestrator | 2026-01-03 00:58:52.494587 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-03 00:58:52.494595 | orchestrator | Saturday 03 January 2026 00:56:34 +0000 (0:00:00.447) 0:00:50.186 ****** 2026-01-03 00:58:52.494609 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:52.494617 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.494625 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.494633 | orchestrator | 2026-01-03 00:58:52.494640 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-03 00:58:52.494648 | orchestrator | Saturday 03 January 2026 00:56:35 +0000 (0:00:00.679) 0:00:50.865 ****** 2026-01-03 00:58:52.494656 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.494664 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.494671 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-03 00:58:52.494679 | orchestrator | 2026-01-03 00:58:52.494686 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-03 00:58:52.494693 | orchestrator | Saturday 03 January 2026 00:56:35 +0000 (0:00:00.421) 0:00:51.287 ****** 2026-01-03 
00:58:52.494700 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:52.494708 | orchestrator | 2026-01-03 00:58:52.494722 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-03 00:58:52.494730 | orchestrator | Saturday 03 January 2026 00:56:45 +0000 (0:00:10.086) 0:01:01.373 ****** 2026-01-03 00:58:52.494738 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:52.494745 | orchestrator | 2026-01-03 00:58:52.494753 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-03 00:58:52.494767 | orchestrator | Saturday 03 January 2026 00:56:46 +0000 (0:00:00.165) 0:01:01.539 ****** 2026-01-03 00:58:52.494775 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:52.494782 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.494790 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.494797 | orchestrator | 2026-01-03 00:58:52.494804 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-03 00:58:52.494812 | orchestrator | Saturday 03 January 2026 00:56:46 +0000 (0:00:00.982) 0:01:02.522 ****** 2026-01-03 00:58:52.494819 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:52.494826 | orchestrator | 2026-01-03 00:58:52.494834 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-03 00:58:52.494841 | orchestrator | Saturday 03 January 2026 00:56:54 +0000 (0:00:07.910) 0:01:10.432 ****** 2026-01-03 00:58:52.494849 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
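The "Wait for first MariaDB service port liveness" handler above (and the earlier failed "Check MariaDB service port liveness" task, whose "Timeout when waiting for search string MariaDB" message is Ansible's `wait_for` with a `search_regex`) boils down to: connect to the node on 3306 and look for "MariaDB" in the server's initial handshake. A minimal standalone sketch of that probe, assuming only that MariaDB embeds its name in the handshake's server-version string (the function and sample bytes below are illustrative, not the role's actual code):

```python
import socket

def banner_is_mariadb(banner: bytes) -> bool:
    """MySQL-protocol servers send a handshake packet on connect; MariaDB
    builds embed 'MariaDB' in the server-version string it contains."""
    return b"MariaDB" in banner

def probe(host: str, port: int = 3306, timeout: float = 10.0) -> bool:
    """Return True once the server at host:port greets us as MariaDB,
    False on connection failure or a non-MariaDB greeting."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return banner_is_mariadb(s.recv(128))
    except OSError:
        return False

# Illustrative handshake fragment (version string portion only):
sample = b"\x0a5.5.5-10.11.6-MariaDB-1:10.11.6\x00"
print(banner_is_mariadb(sample))  # True
```

Before the bootstrap container has started the server, the connection is refused or never greets, which is why the first liveness check times out on all three nodes and is `...ignoring`d: an empty port is the expected state of a fresh deployment.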
2026-01-03 00:58:52.494858 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:52.494871 | orchestrator | 2026-01-03 00:58:52.494879 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-01-03 00:58:52.494886 | orchestrator | Saturday 03 January 2026 00:57:02 +0000 (0:00:07.287) 0:01:17.720 ****** 2026-01-03 00:58:52.494893 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:52.494901 | orchestrator | 2026-01-03 00:58:52.494908 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-03 00:58:52.494916 | orchestrator | Saturday 03 January 2026 00:57:04 +0000 (0:00:02.663) 0:01:20.384 ****** 2026-01-03 00:58:52.494922 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:52.494929 | orchestrator | 2026-01-03 00:58:52.494935 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-03 00:58:52.494942 | orchestrator | Saturday 03 January 2026 00:57:04 +0000 (0:00:00.134) 0:01:20.518 ****** 2026-01-03 00:58:52.494949 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:52.494956 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.494961 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.494968 | orchestrator | 2026-01-03 00:58:52.494976 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-03 00:58:52.494983 | orchestrator | Saturday 03 January 2026 00:57:05 +0000 (0:00:00.343) 0:01:20.861 ****** 2026-01-03 00:58:52.494990 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:52.494998 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-03 00:58:52.495005 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:58:52.495012 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:58:52.495019 | orchestrator | 2026-01-03 00:58:52.495028 | orchestrator | PLAY [Restart 
mariadb services] ************************************************ 2026-01-03 00:58:52.495033 | orchestrator | skipping: no hosts matched 2026-01-03 00:58:52.495038 | orchestrator | 2026-01-03 00:58:52.495042 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-03 00:58:52.495050 | orchestrator | 2026-01-03 00:58:52.495056 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-03 00:58:52.495063 | orchestrator | Saturday 03 January 2026 00:57:05 +0000 (0:00:00.594) 0:01:21.456 ****** 2026-01-03 00:58:52.495070 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:58:52.495077 | orchestrator | 2026-01-03 00:58:52.495084 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-03 00:58:52.495091 | orchestrator | Saturday 03 January 2026 00:57:23 +0000 (0:00:17.139) 0:01:38.595 ****** 2026-01-03 00:58:52.495100 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:58:52.495108 | orchestrator | 2026-01-03 00:58:52.495114 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-03 00:58:52.495122 | orchestrator | Saturday 03 January 2026 00:57:38 +0000 (0:00:15.567) 0:01:54.163 ****** 2026-01-03 00:58:52.495129 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:58:52.495136 | orchestrator | 2026-01-03 00:58:52.495152 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-03 00:58:52.495159 | orchestrator | 2026-01-03 00:58:52.495166 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-03 00:58:52.495174 | orchestrator | Saturday 03 January 2026 00:57:41 +0000 (0:00:02.563) 0:01:56.727 ****** 2026-01-03 00:58:52.495180 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:58:52.495187 | orchestrator | 2026-01-03 00:58:52.495194 | orchestrator | TASK [mariadb : Wait 
for MariaDB service port liveness] ************************ 2026-01-03 00:58:52.495200 | orchestrator | Saturday 03 January 2026 00:57:59 +0000 (0:00:18.495) 0:02:15.222 ****** 2026-01-03 00:58:52.495207 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:58:52.495213 | orchestrator | 2026-01-03 00:58:52.495220 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-03 00:58:52.495227 | orchestrator | Saturday 03 January 2026 00:58:15 +0000 (0:00:15.631) 0:02:30.854 ****** 2026-01-03 00:58:52.495243 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:58:52.495251 | orchestrator | 2026-01-03 00:58:52.495258 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-03 00:58:52.495289 | orchestrator | 2026-01-03 00:58:52.495297 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-03 00:58:52.495304 | orchestrator | Saturday 03 January 2026 00:58:17 +0000 (0:00:02.493) 0:02:33.347 ****** 2026-01-03 00:58:52.495310 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:52.495318 | orchestrator | 2026-01-03 00:58:52.495325 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-03 00:58:52.495332 | orchestrator | Saturday 03 January 2026 00:58:30 +0000 (0:00:12.192) 0:02:45.540 ****** 2026-01-03 00:58:52.495338 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:52.495344 | orchestrator | 2026-01-03 00:58:52.495352 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-03 00:58:52.495362 | orchestrator | Saturday 03 January 2026 00:58:33 +0000 (0:00:03.781) 0:02:49.321 ****** 2026-01-03 00:58:52.495374 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:52.495387 | orchestrator | 2026-01-03 00:58:52.495411 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-03 
00:58:52.495421 | orchestrator | 2026-01-03 00:58:52.495431 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-03 00:58:52.495441 | orchestrator | Saturday 03 January 2026 00:58:36 +0000 (0:00:02.699) 0:02:52.021 ****** 2026-01-03 00:58:52.495452 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:58:52.495463 | orchestrator | 2026-01-03 00:58:52.495474 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-03 00:58:52.495484 | orchestrator | Saturday 03 January 2026 00:58:37 +0000 (0:00:00.538) 0:02:52.560 ****** 2026-01-03 00:58:52.495496 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.495506 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.495517 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:52.495528 | orchestrator | 2026-01-03 00:58:52.495539 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-03 00:58:52.495549 | orchestrator | Saturday 03 January 2026 00:58:39 +0000 (0:00:02.713) 0:02:55.273 ****** 2026-01-03 00:58:52.495560 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.495572 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.495584 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:52.495621 | orchestrator | 2026-01-03 00:58:52.495634 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-03 00:58:52.495647 | orchestrator | Saturday 03 January 2026 00:58:42 +0000 (0:00:02.624) 0:02:57.898 ****** 2026-01-03 00:58:52.495657 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.495682 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.495694 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:52.495706 | orchestrator | 2026-01-03 00:58:52.495718 | orchestrator | TASK [mariadb : Granting 
permissions on Mariabackup database to backup user] *** 2026-01-03 00:58:52.495747 | orchestrator | Saturday 03 January 2026 00:58:44 +0000 (0:00:02.229) 0:03:00.128 ****** 2026-01-03 00:58:52.495760 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.495771 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.495782 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:58:52.495794 | orchestrator | 2026-01-03 00:58:52.495806 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-03 00:58:52.495816 | orchestrator | Saturday 03 January 2026 00:58:46 +0000 (0:00:01.912) 0:03:02.040 ****** 2026-01-03 00:58:52.495827 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:58:52.495837 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:58:52.495849 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:58:52.495859 | orchestrator | 2026-01-03 00:58:52.495871 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-03 00:58:52.495882 | orchestrator | Saturday 03 January 2026 00:58:49 +0000 (0:00:03.069) 0:03:05.109 ****** 2026-01-03 00:58:52.495894 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:58:52.495903 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:58:52.495914 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:58:52.495920 | orchestrator | 2026-01-03 00:58:52.495928 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:58:52.495937 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-03 00:58:52.495946 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-01-03 00:58:52.495954 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-03 00:58:52.495961 | orchestrator | testbed-node-2 : ok=20  
changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-03 00:58:52.495970 | orchestrator | 2026-01-03 00:58:52.495979 | orchestrator | 2026-01-03 00:58:52.495988 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:58:52.495996 | orchestrator | Saturday 03 January 2026 00:58:49 +0000 (0:00:00.239) 0:03:05.349 ****** 2026-01-03 00:58:52.496006 | orchestrator | =============================================================================== 2026-01-03 00:58:52.496015 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.63s 2026-01-03 00:58:52.496025 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.20s 2026-01-03 00:58:52.496035 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.19s 2026-01-03 00:58:52.496046 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.96s 2026-01-03 00:58:52.496056 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.09s 2026-01-03 00:58:52.496080 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.91s 2026-01-03 00:58:52.496091 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.29s 2026-01-03 00:58:52.496101 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.06s 2026-01-03 00:58:52.496111 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.39s 2026-01-03 00:58:52.496121 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.38s 2026-01-03 00:58:52.496130 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.85s 2026-01-03 00:58:52.496140 | orchestrator | mariadb : Wait for MariaDB service port liveness 
------------------------ 3.78s 2026-01-03 00:58:52.496150 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.68s 2026-01-03 00:58:52.496160 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.32s 2026-01-03 00:58:52.496189 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.26s 2026-01-03 00:58:52.496199 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.07s 2026-01-03 00:58:52.496208 | orchestrator | Check MariaDB service --------------------------------------------------- 2.85s 2026-01-03 00:58:52.496218 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.79s 2026-01-03 00:58:52.496227 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.71s 2026-01-03 00:58:52.496235 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.70s 2026-01-03 00:58:52.496241 | orchestrator | 2026-01-03 00:58:52 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 00:58:52.496248 | orchestrator | 2026-01-03 00:58:52 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:52.496255 | orchestrator | 2026-01-03 00:58:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:55.540751 | orchestrator | 2026-01-03 00:58:55 | INFO  | Task b1ff1038-3509-4436-84d4-a80b0ff46d74 is in state STARTED 2026-01-03 00:58:55.540819 | orchestrator | 2026-01-03 00:58:55 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 00:58:55.540825 | orchestrator | 2026-01-03 00:58:55 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:55.540830 | orchestrator | 2026-01-03 00:58:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:58:58.576526 | orchestrator | 2026-01-03 00:58:58 | 
INFO  | Task b1ff1038-3509-4436-84d4-a80b0ff46d74 is in state STARTED 2026-01-03 00:58:58.578275 | orchestrator | 2026-01-03 00:58:58 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 00:58:58.580122 | orchestrator | 2026-01-03 00:58:58 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state STARTED 2026-01-03 00:58:58.580306 | orchestrator | 2026-01-03 00:58:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:44.237988 | orchestrator | 2026-01-03 00:59:44 | INFO  | Task b1ff1038-3509-4436-84d4-a80b0ff46d74 is in state STARTED 2026-01-03 00:59:44.241477 | orchestrator | 2026-01-03 00:59:44 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in
state STARTED 2026-01-03 00:59:44.246596 | orchestrator | 2026-01-03 00:59:44 | INFO  | Task 79cd9b52-a15a-4007-875b-826aa6b1787f is in state SUCCESS 2026-01-03 00:59:44.248037 | orchestrator | 2026-01-03 00:59:44.248089 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-03 00:59:44.248097 | orchestrator | 2.16.14 2026-01-03 00:59:44.248104 | orchestrator | 2026-01-03 00:59:44.248110 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-03 00:59:44.248146 | orchestrator | 2026-01-03 00:59:44.248155 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-03 00:59:44.248164 | orchestrator | Saturday 03 January 2026 00:57:35 +0000 (0:00:00.595) 0:00:00.595 ****** 2026-01-03 00:59:44.248170 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:59:44.248176 | orchestrator | 2026-01-03 00:59:44.248183 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-03 00:59:44.248188 | orchestrator | Saturday 03 January 2026 00:57:36 +0000 (0:00:00.653) 0:00:01.249 ****** 2026-01-03 00:59:44.248195 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.248202 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.248207 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.248212 | orchestrator | 2026-01-03 00:59:44.248218 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-03 00:59:44.248224 | orchestrator | Saturday 03 January 2026 00:57:36 +0000 (0:00:00.669) 0:00:01.918 ****** 2026-01-03 00:59:44.248229 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.248234 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.248482 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.248500 | orchestrator | 2026-01-03 00:59:44.248729 | 
orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-03 00:59:44.248743 | orchestrator | Saturday 03 January 2026 00:57:37 +0000 (0:00:00.310) 0:00:02.229 ****** 2026-01-03 00:59:44.248749 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.248755 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.248761 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.248767 | orchestrator | 2026-01-03 00:59:44.248773 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-03 00:59:44.248779 | orchestrator | Saturday 03 January 2026 00:57:38 +0000 (0:00:00.861) 0:00:03.090 ****** 2026-01-03 00:59:44.248785 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.248791 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.248796 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.248802 | orchestrator | 2026-01-03 00:59:44.248808 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-03 00:59:44.248814 | orchestrator | Saturday 03 January 2026 00:57:38 +0000 (0:00:00.324) 0:00:03.415 ****** 2026-01-03 00:59:44.248859 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.248865 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.248871 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.248877 | orchestrator | 2026-01-03 00:59:44.248883 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-03 00:59:44.248889 | orchestrator | Saturday 03 January 2026 00:57:38 +0000 (0:00:00.316) 0:00:03.731 ****** 2026-01-03 00:59:44.248895 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.248901 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.248907 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.248913 | orchestrator | 2026-01-03 00:59:44.248918 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if 
not previously set] *** 2026-01-03 00:59:44.248925 | orchestrator | Saturday 03 January 2026 00:57:39 +0000 (0:00:00.324) 0:00:04.056 ****** 2026-01-03 00:59:44.248930 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.248937 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:59:44.248943 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:59:44.248948 | orchestrator | 2026-01-03 00:59:44.248954 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-03 00:59:44.248960 | orchestrator | Saturday 03 January 2026 00:57:39 +0000 (0:00:00.585) 0:00:04.641 ****** 2026-01-03 00:59:44.248966 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.248972 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.248977 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.248983 | orchestrator | 2026-01-03 00:59:44.248989 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-03 00:59:44.248995 | orchestrator | Saturday 03 January 2026 00:57:39 +0000 (0:00:00.288) 0:00:04.930 ****** 2026-01-03 00:59:44.249001 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-03 00:59:44.249006 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-03 00:59:44.249012 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-03 00:59:44.249076 | orchestrator | 2026-01-03 00:59:44.249081 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-03 00:59:44.249085 | orchestrator | Saturday 03 January 2026 00:57:40 +0000 (0:00:00.667) 0:00:05.597 ****** 2026-01-03 00:59:44.249090 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.249096 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.249104 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.249112 | 
orchestrator | 2026-01-03 00:59:44.249118 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-03 00:59:44.249124 | orchestrator | Saturday 03 January 2026 00:57:41 +0000 (0:00:00.511) 0:00:06.109 ****** 2026-01-03 00:59:44.249130 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-03 00:59:44.249136 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-03 00:59:44.249155 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-03 00:59:44.249163 | orchestrator | 2026-01-03 00:59:44.249169 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-03 00:59:44.249184 | orchestrator | Saturday 03 January 2026 00:57:43 +0000 (0:00:02.105) 0:00:08.214 ****** 2026-01-03 00:59:44.249323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-03 00:59:44.249332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-03 00:59:44.249339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-03 00:59:44.249345 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.249351 | orchestrator | 2026-01-03 00:59:44.249382 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-03 00:59:44.249387 | orchestrator | Saturday 03 January 2026 00:57:43 +0000 (0:00:00.730) 0:00:08.945 ****** 2026-01-03 00:59:44.249392 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.249397 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.249401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.249424 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.249430 | orchestrator | 2026-01-03 00:59:44.249436 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-03 00:59:44.249442 | orchestrator | Saturday 03 January 2026 00:57:44 +0000 (0:00:00.919) 0:00:09.864 ****** 2026-01-03 00:59:44.249476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.249483 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.249488 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.249492 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.249496 | orchestrator | 2026-01-03 00:59:44.249500 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-03 00:59:44.249504 | orchestrator | Saturday 03 January 2026 00:57:45 +0000 (0:00:00.371) 0:00:10.236 ****** 2026-01-03 00:59:44.249508 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3b82d4921609', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-03 00:57:41.796775', 'end': '2026-01-03 00:57:41.840689', 'delta': '0:00:00.043914', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3b82d4921609'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-03 00:59:44.249524 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd9476ec32c3d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-03 00:57:42.551009', 'end': '2026-01-03 00:57:42.578787', 'delta': '0:00:00.027778', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d9476ec32c3d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': 
False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-03 00:59:44.249546 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '93612ee064d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-03 00:57:43.029895', 'end': '2026-01-03 00:57:43.057998', 'delta': '0:00:00.028103', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['93612ee064d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-03 00:59:44.249551 | orchestrator | 2026-01-03 00:59:44.249556 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-03 00:59:44.249563 | orchestrator | Saturday 03 January 2026 00:57:45 +0000 (0:00:00.221) 0:00:10.457 ****** 2026-01-03 00:59:44.249569 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.249574 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.249580 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.249586 | orchestrator | 2026-01-03 00:59:44.249621 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-03 00:59:44.249633 | orchestrator | Saturday 03 January 2026 00:57:45 +0000 (0:00:00.430) 0:00:10.887 ****** 2026-01-03 00:59:44.249639 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-03 00:59:44.249646 | orchestrator | 2026-01-03 00:59:44.249653 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-03 00:59:44.249657 | orchestrator | Saturday 03 January 2026 00:57:47 +0000 (0:00:01.882) 0:00:12.770 
******
2026-01-03 00:59:44.249660 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:59:44.249664 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:59:44.249668 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:59:44.249672 | orchestrator |
2026-01-03 00:59:44.249676 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-03 00:59:44.249679 | orchestrator | Saturday 03 January 2026 00:57:48 +0000 (0:00:00.301) 0:00:13.072 ******
2026-01-03 00:59:44.249683 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:59:44.249687 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:59:44.249693 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:59:44.249700 | orchestrator |
2026-01-03 00:59:44.249705 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-03 00:59:44.249711 | orchestrator | Saturday 03 January 2026 00:57:48 +0000 (0:00:00.422) 0:00:13.494 ******
2026-01-03 00:59:44.249717 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:59:44.249723 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:59:44.249736 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:59:44.249742 | orchestrator |
2026-01-03 00:59:44.249748 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-03 00:59:44.249755 | orchestrator | Saturday 03 January 2026 00:57:48 +0000 (0:00:00.529) 0:00:14.023 ******
2026-01-03 00:59:44.249760 | orchestrator | ok: [testbed-node-3]
2026-01-03 00:59:44.249766 | orchestrator |
2026-01-03 00:59:44.249773 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-03 00:59:44.249779 | orchestrator | Saturday 03 January 2026 00:57:49 +0000 (0:00:00.147) 0:00:14.171 ******
2026-01-03 00:59:44.249786 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:59:44.249792 | orchestrator |
2026-01-03 00:59:44.249799 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-03 00:59:44.249805 | orchestrator | Saturday 03 January 2026 00:57:49 +0000 (0:00:00.233) 0:00:14.405 ******
2026-01-03 00:59:44.249812 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:59:44.249816 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:59:44.249820 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:59:44.249823 | orchestrator |
2026-01-03 00:59:44.249827 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-03 00:59:44.249842 | orchestrator | Saturday 03 January 2026 00:57:49 +0000 (0:00:00.312) 0:00:14.718 ******
2026-01-03 00:59:44.249852 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:59:44.249858 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:59:44.249871 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:59:44.249877 | orchestrator |
2026-01-03 00:59:44.249884 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-03 00:59:44.249890 | orchestrator | Saturday 03 January 2026 00:57:49 +0000 (0:00:00.315) 0:00:15.033 ******
2026-01-03 00:59:44.249896 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:59:44.249902 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:59:44.249908 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:59:44.249914 | orchestrator |
2026-01-03 00:59:44.249920 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-03 00:59:44.249926 | orchestrator | Saturday 03 January 2026 00:57:50 +0000 (0:00:00.640) 0:00:15.673 ******
2026-01-03 00:59:44.249933 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:59:44.249939 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:59:44.249943 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:59:44.249947 | orchestrator |
2026-01-03 00:59:44.249951 |
orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-03 00:59:44.249955 | orchestrator | Saturday 03 January 2026 00:57:50 +0000 (0:00:00.330) 0:00:16.004 ******
2026-01-03 00:59:44.249960 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:59:44.249965 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:59:44.249971 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:59:44.249978 | orchestrator |
2026-01-03 00:59:44.249989 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-03 00:59:44.249996 | orchestrator | Saturday 03 January 2026 00:57:51 +0000 (0:00:00.337) 0:00:16.342 ******
2026-01-03 00:59:44.250003 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:59:44.250010 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:59:44.250053 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:59:44.250089 | orchestrator |
2026-01-03 00:59:44.250094 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-03 00:59:44.250099 | orchestrator | Saturday 03 January 2026 00:57:51 +0000 (0:00:00.333) 0:00:16.676 ******
2026-01-03 00:59:44.250105 | orchestrator | skipping: [testbed-node-3]
2026-01-03 00:59:44.250114 | orchestrator | skipping: [testbed-node-4]
2026-01-03 00:59:44.250121 | orchestrator | skipping: [testbed-node-5]
2026-01-03 00:59:44.250128 | orchestrator |
2026-01-03 00:59:44.250135 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-03 00:59:44.250141 | orchestrator | Saturday 03 January 2026 00:57:52 +0000 (0:00:00.516) 0:00:17.192 ******
2026-01-03 00:59:44.250153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--147f94e4--6564--5421--8ac2--dc0697a6d722-osd--block--147f94e4--6564--5421--8ac2--dc0697a6d722', 
'dm-uuid-LVM-VugVWX0xLFMxWH3ZLd8vaBvk7vZ2V2buA0HTw5gwHTF0naug4r1MkKve5QW6RixC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43909478--d18c--58e7--896e--8d0e3e550915-osd--block--43909478--d18c--58e7--896e--8d0e3e550915', 'dm-uuid-LVM-FVkTKVtS7jWHNPhNvjzCqcUnVH85HKsJj4Q1k4st1cUj1pSVsQIzOk6QwEbwnq3t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250184 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part15', 
'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--147f94e4--6564--5421--8ac2--dc0697a6d722-osd--block--147f94e4--6564--5421--8ac2--dc0697a6d722'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kxIVcu-OJkD-rOZq-i7F3-Q4SP-FZlO-Vqy1gX', 'scsi-0QEMU_QEMU_HARDDISK_87925512-ad51-4d26-92cc-5f354ec37d18', 'scsi-SQEMU_QEMU_HARDDISK_87925512-ad51-4d26-92cc-5f354ec37d18'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--43909478--d18c--58e7--896e--8d0e3e550915-osd--block--43909478--d18c--58e7--896e--8d0e3e550915'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V2wRCC-19G3-RH1r-a0C6-3PRc-2ZMZ-n8RGBh', 'scsi-0QEMU_QEMU_HARDDISK_5af48d4e-a5d9-4c77-9873-39f930691ccf', 'scsi-SQEMU_QEMU_HARDDISK_5af48d4e-a5d9-4c77-9873-39f930691ccf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f97db499--9f50--5724--b4de--324784fab4ab-osd--block--f97db499--9f50--5724--b4de--324784fab4ab', 'dm-uuid-LVM-ndWslIiwf71ppOn2x6PIQ7ad3SX2au6xrHhVoHdwaukJYuY2LCSJQQc8XYE8M5hH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b3b8b78-42ce-4774-a4e8-f10424aa2bf1', 'scsi-SQEMU_QEMU_HARDDISK_0b3b8b78-42ce-4774-a4e8-f10424aa2bf1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--293f14c0--405b--5b3a--a5c8--f3b182003048-osd--block--293f14c0--405b--5b3a--a5c8--f3b182003048', 'dm-uuid-LVM-vHFp7zBiIjDYUVrHy51ObCTIcOMIAAApdPtk289GqEZ1R1LWrnrb1JanU7nRdTxY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250391 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.250398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part1', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part14', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part15', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part16', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250488 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f97db499--9f50--5724--b4de--324784fab4ab-osd--block--f97db499--9f50--5724--b4de--324784fab4ab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jop0PX-Eq53-g3X6-3Znd-ivQs-Y4IJ-HbxiEU', 'scsi-0QEMU_QEMU_HARDDISK_38f41ea3-c1f8-47b6-a316-62c713a7ab6d', 'scsi-SQEMU_QEMU_HARDDISK_38f41ea3-c1f8-47b6-a316-62c713a7ab6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--293f14c0--405b--5b3a--a5c8--f3b182003048-osd--block--293f14c0--405b--5b3a--a5c8--f3b182003048'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K3GfH4-OdAN-3DHR-yzja-SS13-wYuw-r2xFkO', 'scsi-0QEMU_QEMU_HARDDISK_65a14f46-a6f7-4da8-aafc-46a47f969b79', 'scsi-SQEMU_QEMU_HARDDISK_65a14f46-a6f7-4da8-aafc-46a47f969b79'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5f15048-4e23-4094-a7eb-216bc02a3879', 'scsi-SQEMU_QEMU_HARDDISK_e5f15048-4e23-4094-a7eb-216bc02a3879'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--124077fc--a709--5275--a3b4--8defea20aa20-osd--block--124077fc--a709--5275--a3b4--8defea20aa20', 'dm-uuid-LVM-U2TEeftFf5xZXlCo0y92bsW3URepmFeBXAPfH071IwwGzLi5nNcl8XZwFDHpfI9x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250578 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:59:44.250592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--43153f84--c643--5017--9328--2bdcf330b780-osd--block--43153f84--c643--5017--9328--2bdcf330b780', 'dm-uuid-LVM-Z4AbDkyKPDUKoreVahmAvvrYi0XeSRDNay6MC5Whtl4BZWLLAoaAKysVy8GjWJLt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-03 00:59:44.250663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part1', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part14', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part15', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part16', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--124077fc--a709--5275--a3b4--8defea20aa20-osd--block--124077fc--a709--5275--a3b4--8defea20aa20'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AYb26I-Z0TP-zaBQ-KZeQ-o9xr-ugwk-L4kNfq', 'scsi-0QEMU_QEMU_HARDDISK_5710b3c7-cfac-4b3c-9149-eeb74f32a79f', 'scsi-SQEMU_QEMU_HARDDISK_5710b3c7-cfac-4b3c-9149-eeb74f32a79f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--43153f84--c643--5017--9328--2bdcf330b780-osd--block--43153f84--c643--5017--9328--2bdcf330b780'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-O1Nw5l-Gl6i-Mupu-1XpS-ZTPb-TeOH-HFVJaH', 'scsi-0QEMU_QEMU_HARDDISK_d4e0be62-f642-4458-ac3b-093009378a3c', 'scsi-SQEMU_QEMU_HARDDISK_d4e0be62-f642-4458-ac3b-093009378a3c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55b36885-901f-4155-8165-44f8903e4943', 'scsi-SQEMU_QEMU_HARDDISK_55b36885-901f-4155-8165-44f8903e4943'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-03 00:59:44.250707 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:59:44.250714 | orchestrator | 2026-01-03 00:59:44.250720 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-01-03 00:59:44.250727 | orchestrator | Saturday 03 January 2026 00:57:52 +0000 (0:00:00.598) 0:00:17.791 ****** 2026-01-03 00:59:44.250734 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--147f94e4--6564--5421--8ac2--dc0697a6d722-osd--block--147f94e4--6564--5421--8ac2--dc0697a6d722', 'dm-uuid-LVM-VugVWX0xLFMxWH3ZLd8vaBvk7vZ2V2buA0HTw5gwHTF0naug4r1MkKve5QW6RixC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43909478--d18c--58e7--896e--8d0e3e550915-osd--block--43909478--d18c--58e7--896e--8d0e3e550915', 'dm-uuid-LVM-FVkTKVtS7jWHNPhNvjzCqcUnVH85HKsJj4Q1k4st1cUj1pSVsQIzOk6QwEbwnq3t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250755 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250769 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250788 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250804 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250811 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc3a99a0-d1c9-4d4c-8382-3504c476ee36-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-03 00:59:44.250844 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f97db499--9f50--5724--b4de--324784fab4ab-osd--block--f97db499--9f50--5724--b4de--324784fab4ab', 'dm-uuid-LVM-ndWslIiwf71ppOn2x6PIQ7ad3SX2au6xrHhVoHdwaukJYuY2LCSJQQc8XYE8M5hH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250851 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--147f94e4--6564--5421--8ac2--dc0697a6d722-osd--block--147f94e4--6564--5421--8ac2--dc0697a6d722'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kxIVcu-OJkD-rOZq-i7F3-Q4SP-FZlO-Vqy1gX', 'scsi-0QEMU_QEMU_HARDDISK_87925512-ad51-4d26-92cc-5f354ec37d18', 'scsi-SQEMU_QEMU_HARDDISK_87925512-ad51-4d26-92cc-5f354ec37d18'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--43909478--d18c--58e7--896e--8d0e3e550915-osd--block--43909478--d18c--58e7--896e--8d0e3e550915'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V2wRCC-19G3-RH1r-a0C6-3PRc-2ZMZ-n8RGBh', 'scsi-0QEMU_QEMU_HARDDISK_5af48d4e-a5d9-4c77-9873-39f930691ccf', 'scsi-SQEMU_QEMU_HARDDISK_5af48d4e-a5d9-4c77-9873-39f930691ccf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250879 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--293f14c0--405b--5b3a--a5c8--f3b182003048-osd--block--293f14c0--405b--5b3a--a5c8--f3b182003048', 
'dm-uuid-LVM-vHFp7zBiIjDYUVrHy51ObCTIcOMIAAApdPtk289GqEZ1R1LWrnrb1JanU7nRdTxY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250887 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b3b8b78-42ce-4774-a4e8-f10424aa2bf1', 'scsi-SQEMU_QEMU_HARDDISK_0b3b8b78-42ce-4774-a4e8-f10424aa2bf1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250894 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250901 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250913 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250919 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.250928 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250940 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250947 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250954 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250961 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250967 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250979 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--124077fc--a709--5275--a3b4--8defea20aa20-osd--block--124077fc--a709--5275--a3b4--8defea20aa20', 'dm-uuid-LVM-U2TEeftFf5xZXlCo0y92bsW3URepmFeBXAPfH071IwwGzLi5nNcl8XZwFDHpfI9x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.250995 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part1', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part14', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part15', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part16', 'scsi-SQEMU_QEMU_HARDDISK_84db6a0e-1808-47c1-b10d-bd69b23c363f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251004 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43153f84--c643--5017--9328--2bdcf330b780-osd--block--43153f84--c643--5017--9328--2bdcf330b780', 'dm-uuid-LVM-Z4AbDkyKPDUKoreVahmAvvrYi0XeSRDNay6MC5Whtl4BZWLLAoaAKysVy8GjWJLt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251015 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f97db499--9f50--5724--b4de--324784fab4ab-osd--block--f97db499--9f50--5724--b4de--324784fab4ab'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jop0PX-Eq53-g3X6-3Znd-ivQs-Y4IJ-HbxiEU', 'scsi-0QEMU_QEMU_HARDDISK_38f41ea3-c1f8-47b6-a316-62c713a7ab6d', 'scsi-SQEMU_QEMU_HARDDISK_38f41ea3-c1f8-47b6-a316-62c713a7ab6d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251029 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251035 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--293f14c0--405b--5b3a--a5c8--f3b182003048-osd--block--293f14c0--405b--5b3a--a5c8--f3b182003048'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K3GfH4-OdAN-3DHR-yzja-SS13-wYuw-r2xFkO', 'scsi-0QEMU_QEMU_HARDDISK_65a14f46-a6f7-4da8-aafc-46a47f969b79', 'scsi-SQEMU_QEMU_HARDDISK_65a14f46-a6f7-4da8-aafc-46a47f969b79'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251042 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251049 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5f15048-4e23-4094-a7eb-216bc02a3879', 'scsi-SQEMU_QEMU_HARDDISK_e5f15048-4e23-4094-a7eb-216bc02a3879'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251059 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251066 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251078 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251085 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:59:44.251091 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251097 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251122 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251133 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251146 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part1', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part14', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part15', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part16', 'scsi-SQEMU_QEMU_HARDDISK_0efbb5a5-d679-4c29-9bc8-28d64ba0e8ca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-03 00:59:44.251153 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--124077fc--a709--5275--a3b4--8defea20aa20-osd--block--124077fc--a709--5275--a3b4--8defea20aa20'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AYb26I-Z0TP-zaBQ-KZeQ-o9xr-ugwk-L4kNfq', 'scsi-0QEMU_QEMU_HARDDISK_5710b3c7-cfac-4b3c-9149-eeb74f32a79f', 'scsi-SQEMU_QEMU_HARDDISK_5710b3c7-cfac-4b3c-9149-eeb74f32a79f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251163 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--43153f84--c643--5017--9328--2bdcf330b780-osd--block--43153f84--c643--5017--9328--2bdcf330b780'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-O1Nw5l-Gl6i-Mupu-1XpS-ZTPb-TeOH-HFVJaH', 'scsi-0QEMU_QEMU_HARDDISK_d4e0be62-f642-4458-ac3b-093009378a3c', 'scsi-SQEMU_QEMU_HARDDISK_d4e0be62-f642-4458-ac3b-093009378a3c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251169 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55b36885-901f-4155-8165-44f8903e4943', 'scsi-SQEMU_QEMU_HARDDISK_55b36885-901f-4155-8165-44f8903e4943'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251183 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-03-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-03 00:59:44.251190 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:59:44.251196 | orchestrator | 2026-01-03 00:59:44.251202 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-03 00:59:44.251209 | orchestrator | Saturday 03 January 2026 00:57:53 +0000 (0:00:00.607) 0:00:18.398 ****** 2026-01-03 00:59:44.251216 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.251223 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.251229 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.251235 | orchestrator | 2026-01-03 00:59:44.251242 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-03 00:59:44.251248 | orchestrator | Saturday 03 January 2026 00:57:54 +0000 (0:00:00.833) 0:00:19.232 ****** 2026-01-03 00:59:44.251255 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.251262 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.251268 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.251275 | orchestrator | 2026-01-03 00:59:44.251281 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-03 00:59:44.251287 | orchestrator | Saturday 03 January 2026 00:57:54 +0000 (0:00:00.566) 0:00:19.798 ****** 2026-01-03 00:59:44.251293 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.251299 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.251311 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.251318 | orchestrator | 2026-01-03 00:59:44.251325 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-03 00:59:44.251332 | orchestrator | Saturday 03 January 2026 00:57:55 +0000 (0:00:00.684) 0:00:20.483 
****** 2026-01-03 00:59:44.251339 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.251345 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:59:44.251352 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:59:44.251358 | orchestrator | 2026-01-03 00:59:44.251364 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-03 00:59:44.251370 | orchestrator | Saturday 03 January 2026 00:57:55 +0000 (0:00:00.306) 0:00:20.789 ****** 2026-01-03 00:59:44.251376 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.251383 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:59:44.251389 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:59:44.251396 | orchestrator | 2026-01-03 00:59:44.251403 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-03 00:59:44.251443 | orchestrator | Saturday 03 January 2026 00:57:56 +0000 (0:00:00.428) 0:00:21.217 ****** 2026-01-03 00:59:44.251450 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.251456 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:59:44.251461 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:59:44.251467 | orchestrator | 2026-01-03 00:59:44.251474 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-03 00:59:44.251480 | orchestrator | Saturday 03 January 2026 00:57:56 +0000 (0:00:00.554) 0:00:21.772 ****** 2026-01-03 00:59:44.251487 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-03 00:59:44.251493 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-03 00:59:44.251499 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-03 00:59:44.251505 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-03 00:59:44.251511 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-03 00:59:44.251517 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-03 00:59:44.251523 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-03 00:59:44.251530 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-03 00:59:44.251537 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-03 00:59:44.251543 | orchestrator | 2026-01-03 00:59:44.251550 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-03 00:59:44.251557 | orchestrator | Saturday 03 January 2026 00:57:57 +0000 (0:00:00.821) 0:00:22.593 ****** 2026-01-03 00:59:44.251564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-03 00:59:44.251570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-03 00:59:44.251576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-03 00:59:44.251583 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.251588 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-03 00:59:44.251595 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-03 00:59:44.251602 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-03 00:59:44.251609 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:59:44.251616 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-03 00:59:44.251622 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-03 00:59:44.251629 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-03 00:59:44.251636 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:59:44.251643 | orchestrator | 2026-01-03 00:59:44.251650 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-03 00:59:44.251657 | orchestrator | Saturday 03 January 2026 00:57:57 +0000 (0:00:00.369) 0:00:22.963 ****** 2026-01-03 
00:59:44.251670 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 00:59:44.251685 | orchestrator | 2026-01-03 00:59:44.251693 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-03 00:59:44.251701 | orchestrator | Saturday 03 January 2026 00:57:58 +0000 (0:00:00.763) 0:00:23.726 ****** 2026-01-03 00:59:44.251714 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.251721 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:59:44.251728 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:59:44.251735 | orchestrator | 2026-01-03 00:59:44.251742 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-03 00:59:44.251749 | orchestrator | Saturday 03 January 2026 00:57:59 +0000 (0:00:00.331) 0:00:24.058 ****** 2026-01-03 00:59:44.251755 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.251762 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:59:44.251768 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:59:44.251774 | orchestrator | 2026-01-03 00:59:44.251780 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-03 00:59:44.251787 | orchestrator | Saturday 03 January 2026 00:57:59 +0000 (0:00:00.320) 0:00:24.379 ****** 2026-01-03 00:59:44.251793 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.251800 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:59:44.251807 | orchestrator | skipping: [testbed-node-5] 2026-01-03 00:59:44.251813 | orchestrator | 2026-01-03 00:59:44.251819 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-03 00:59:44.251826 | orchestrator | Saturday 03 January 2026 00:57:59 +0000 (0:00:00.347) 0:00:24.727 ****** 2026-01-03 
00:59:44.251832 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.251839 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.251845 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.251849 | orchestrator | 2026-01-03 00:59:44.251853 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-03 00:59:44.251857 | orchestrator | Saturday 03 January 2026 00:58:00 +0000 (0:00:00.651) 0:00:25.378 ****** 2026-01-03 00:59:44.251861 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:59:44.251865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:59:44.251869 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:59:44.251873 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.251876 | orchestrator | 2026-01-03 00:59:44.251880 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-03 00:59:44.251884 | orchestrator | Saturday 03 January 2026 00:58:00 +0000 (0:00:00.385) 0:00:25.764 ****** 2026-01-03 00:59:44.251888 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:59:44.251892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:59:44.251895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:59:44.251899 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.251903 | orchestrator | 2026-01-03 00:59:44.251907 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-03 00:59:44.251911 | orchestrator | Saturday 03 January 2026 00:58:01 +0000 (0:00:00.417) 0:00:26.181 ****** 2026-01-03 00:59:44.251915 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-03 00:59:44.251919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-03 00:59:44.251922 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-03 00:59:44.251926 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.251930 | orchestrator | 2026-01-03 00:59:44.251934 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-03 00:59:44.251938 | orchestrator | Saturday 03 January 2026 00:58:01 +0000 (0:00:00.378) 0:00:26.559 ****** 2026-01-03 00:59:44.251942 | orchestrator | ok: [testbed-node-3] 2026-01-03 00:59:44.251945 | orchestrator | ok: [testbed-node-4] 2026-01-03 00:59:44.251949 | orchestrator | ok: [testbed-node-5] 2026-01-03 00:59:44.251957 | orchestrator | 2026-01-03 00:59:44.251961 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-03 00:59:44.251965 | orchestrator | Saturday 03 January 2026 00:58:01 +0000 (0:00:00.323) 0:00:26.883 ****** 2026-01-03 00:59:44.251971 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-03 00:59:44.251977 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-03 00:59:44.251983 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-03 00:59:44.251990 | orchestrator | 2026-01-03 00:59:44.251996 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-03 00:59:44.252002 | orchestrator | Saturday 03 January 2026 00:58:02 +0000 (0:00:00.505) 0:00:27.388 ****** 2026-01-03 00:59:44.252009 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-03 00:59:44.252015 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-03 00:59:44.252022 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-03 00:59:44.252028 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-03 00:59:44.252034 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-03 00:59:44.252040 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-03 00:59:44.252047 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-03 00:59:44.252053 | orchestrator | 2026-01-03 00:59:44.252059 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-03 00:59:44.252066 | orchestrator | Saturday 03 January 2026 00:58:03 +0000 (0:00:01.036) 0:00:28.425 ****** 2026-01-03 00:59:44.252072 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-03 00:59:44.252078 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-03 00:59:44.252088 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-03 00:59:44.252095 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-03 00:59:44.252102 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-03 00:59:44.252108 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-03 00:59:44.252119 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-03 00:59:44.252126 | orchestrator | 2026-01-03 00:59:44.252132 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-03 00:59:44.252138 | orchestrator | Saturday 03 January 2026 00:58:05 +0000 (0:00:02.073) 0:00:30.498 ****** 2026-01-03 00:59:44.252144 | orchestrator | skipping: [testbed-node-3] 2026-01-03 00:59:44.252150 | orchestrator | skipping: [testbed-node-4] 2026-01-03 00:59:44.252157 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-03 00:59:44.252163 | orchestrator | 2026-01-03 00:59:44.252170 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-03 00:59:44.252177 | orchestrator | Saturday 03 January 2026 00:58:05 +0000 (0:00:00.361) 0:00:30.860 ****** 2026-01-03 00:59:44.252184 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-03 00:59:44.252191 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-03 00:59:44.252198 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-03 00:59:44.252209 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-03 00:59:44.252215 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-03 00:59:44.252222 | orchestrator | 2026-01-03 00:59:44.252228 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-01-03 00:59:44.252234 | orchestrator | Saturday 03 January 2026 00:58:50 +0000 (0:00:44.913) 0:01:15.773 ****** 2026-01-03 00:59:44.252240 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252247 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252254 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252260 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252267 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252274 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252281 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-03 00:59:44.252288 | orchestrator | 2026-01-03 00:59:44.252295 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-03 00:59:44.252302 | orchestrator | Saturday 03 January 2026 00:59:14 +0000 (0:00:23.533) 0:01:39.306 ****** 2026-01-03 00:59:44.252309 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252316 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252323 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252330 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252337 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252344 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252350 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-03 00:59:44.252356 | orchestrator | 2026-01-03 00:59:44.252362 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-03 00:59:44.252369 | orchestrator | Saturday 03 January 2026 00:59:26 +0000 (0:00:11.895) 0:01:51.202 ****** 2026-01-03 00:59:44.252375 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252386 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-03 00:59:44.252393 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:59:44.252399 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252419 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-03 00:59:44.252430 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:59:44.252436 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252442 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-03 00:59:44.252455 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:59:44.252462 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252468 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-03 00:59:44.252474 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:59:44.252480 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252496 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-01-03 00:59:44.252503 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:59:44.252516 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-03 00:59:44.252523 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-03 00:59:44.252530 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-03 00:59:44.252537 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-03 00:59:44.252544 | orchestrator | 2026-01-03 00:59:44.252551 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:59:44.252557 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-03 00:59:44.252564 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-03 00:59:44.252571 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-03 00:59:44.252577 | orchestrator | 2026-01-03 00:59:44.252584 | orchestrator | 2026-01-03 00:59:44.252590 | orchestrator | 2026-01-03 00:59:44.252596 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:59:44.252642 | orchestrator | Saturday 03 January 2026 00:59:42 +0000 (0:00:16.555) 0:02:07.757 ****** 2026-01-03 00:59:44.252653 | orchestrator | =============================================================================== 2026-01-03 00:59:44.252660 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.91s 2026-01-03 00:59:44.252666 | orchestrator | generate keys ---------------------------------------------------------- 23.53s 2026-01-03 00:59:44.252672 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.56s 
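The 44.9s "create openstack pool(s)" task above configures five RBD pools with identical parameters (pg_num 32, pgp_num 32, size 3, replicated_rule, application rbd). As a rough sketch of what that step amounts to, the snippet below renders the equivalent plain `ceph` CLI calls from those specs; the command shapes are standard ceph CLI, not the exact invocations the ceph-ansible module issues, and the helper names are illustrative:

```python
# Sketch (assumption): approximate the "create openstack pool(s)" step as
# plain `ceph` CLI commands. Pool names and parameters mirror the task
# items logged above; ceph-ansible's module may issue different calls.
POOLS = ["backups", "volumes", "images", "metrics", "vms"]

def pool_commands(name, pg_num=32, size=3, rule="replicated_rule", app="rbd"):
    """Return the CLI commands that would create one pool as specified."""
    return [
        # create a replicated pool with pg_num/pgp_num and a CRUSH rule
        f"ceph osd pool create {name} {pg_num} {pg_num} replicated {rule}",
        # set the replica count (size 3 in the logged items)
        f"ceph osd pool set {name} size {size}",
        # tag the pool for its application (rbd for all five pools here)
        f"ceph osd pool application enable {name} {app}",
    ]

commands = [cmd for pool in POOLS for cmd in pool_commands(pool)]
for cmd in commands:
    print(cmd)
```

Three commands per pool, fifteen in total, which is consistent with the task iterating once per pool item in the log.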
2026-01-03 00:59:44.252678 | orchestrator | get keys from monitors ------------------------------------------------- 11.90s 2026-01-03 00:59:44.252684 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.11s 2026-01-03 00:59:44.252691 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.07s 2026-01-03 00:59:44.252697 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.88s 2026-01-03 00:59:44.252703 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.04s 2026-01-03 00:59:44.252710 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.92s 2026-01-03 00:59:44.252716 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.86s 2026-01-03 00:59:44.252722 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.83s 2026-01-03 00:59:44.252728 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.82s 2026-01-03 00:59:44.252735 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.76s 2026-01-03 00:59:44.252741 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.73s 2026-01-03 00:59:44.252748 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s 2026-01-03 00:59:44.252755 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.67s 2026-01-03 00:59:44.252761 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s 2026-01-03 00:59:44.252774 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s 2026-01-03 00:59:44.252781 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.65s 2026-01-03 
00:59:44.252787 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.64s 2026-01-03 00:59:44.252793 | orchestrator | 2026-01-03 00:59:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:47.308702 | orchestrator | 2026-01-03 00:59:47 | INFO  | Task b1ff1038-3509-4436-84d4-a80b0ff46d74 is in state STARTED 2026-01-03 00:59:47.311029 | orchestrator | 2026-01-03 00:59:47 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 00:59:47.315521 | orchestrator | 2026-01-03 00:59:47 | INFO  | Task 2a6908b0-61c5-467d-b829-46ae73da501f is in state STARTED 2026-01-03 00:59:47.315574 | orchestrator | 2026-01-03 00:59:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:50.354139 | orchestrator | 2026-01-03 00:59:50 | INFO  | Task b1ff1038-3509-4436-84d4-a80b0ff46d74 is in state STARTED 2026-01-03 00:59:50.357174 | orchestrator | 2026-01-03 00:59:50 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 00:59:50.358815 | orchestrator | 2026-01-03 00:59:50 | INFO  | Task 2a6908b0-61c5-467d-b829-46ae73da501f is in state STARTED 2026-01-03 00:59:50.358866 | orchestrator | 2026-01-03 00:59:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:53.397878 | orchestrator | 2026-01-03 00:59:53 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED 2026-01-03 00:59:53.398391 | orchestrator | 2026-01-03 00:59:53 | INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED 2026-01-03 00:59:53.402147 | orchestrator | 2026-01-03 00:59:53 | INFO  | Task b1ff1038-3509-4436-84d4-a80b0ff46d74 is in state SUCCESS 2026-01-03 00:59:53.404575 | orchestrator | 2026-01-03 00:59:53 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 00:59:53.405247 | orchestrator | 2026-01-03 00:59:53.405270 | orchestrator | 2026-01-03 00:59:53.405278 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-01-03 00:59:53.405286 | orchestrator | 2026-01-03 00:59:53.405293 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 00:59:53.405300 | orchestrator | Saturday 03 January 2026 00:58:54 +0000 (0:00:00.254) 0:00:00.254 ****** 2026-01-03 00:59:53.405307 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:53.405314 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:53.405321 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:53.405328 | orchestrator | 2026-01-03 00:59:53.405335 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 00:59:53.405342 | orchestrator | Saturday 03 January 2026 00:58:54 +0000 (0:00:00.259) 0:00:00.513 ****** 2026-01-03 00:59:53.405382 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-03 00:59:53.405389 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-03 00:59:53.405395 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-03 00:59:53.405401 | orchestrator | 2026-01-03 00:59:53.405407 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-03 00:59:53.405413 | orchestrator | 2026-01-03 00:59:53.405420 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-03 00:59:53.405460 | orchestrator | Saturday 03 January 2026 00:58:55 +0000 (0:00:00.330) 0:00:00.844 ****** 2026-01-03 00:59:53.405469 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:59:53.405477 | orchestrator | 2026-01-03 00:59:53.405483 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-03 00:59:53.405490 | orchestrator | Saturday 03 January 2026 00:58:55 +0000 (0:00:00.455) 0:00:01.300 ****** 2026-01-03 
00:59:53.405585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.405613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.405835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.405873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:59:53.405881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:59:53.405895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:59:53.405902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.405913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.405921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.405928 | orchestrator | 2026-01-03 00:59:53.405935 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-03 00:59:53.405971 | orchestrator | Saturday 03 January 2026 00:58:57 +0000 (0:00:01.874) 0:00:03.175 ****** 2026-01-03 00:59:53.405979 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:53.405986 | orchestrator | 2026-01-03 00:59:53.405997 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-03 00:59:53.406004 | orchestrator | Saturday 03 January 2026 00:58:57 +0000 (0:00:00.121) 0:00:03.296 ****** 2026-01-03 00:59:53.406011 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:53.406044 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:53.406051 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:53.406058 | orchestrator | 2026-01-03 00:59:53.406064 | orchestrator | TASK [keystone : Check if 
Keystone domain-specific config is supplied] ********* 2026-01-03 00:59:53.406071 | orchestrator | Saturday 03 January 2026 00:58:57 +0000 (0:00:00.385) 0:00:03.682 ****** 2026-01-03 00:59:53.406078 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 00:59:53.406084 | orchestrator | 2026-01-03 00:59:53.406091 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-03 00:59:53.406103 | orchestrator | Saturday 03 January 2026 00:58:58 +0000 (0:00:00.746) 0:00:04.428 ****** 2026-01-03 00:59:53.406110 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 00:59:53.406117 | orchestrator | 2026-01-03 00:59:53.406124 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-03 00:59:53.406130 | orchestrator | Saturday 03 January 2026 00:58:59 +0000 (0:00:00.507) 0:00:04.936 ****** 2026-01-03 00:59:53.406138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2026-01-03 00:59:53.406146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.406157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.406170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
2026-01-03 00:59:53.406218 | orchestrator | 2026-01-03 00:59:53.406225 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-03 00:59:53.406232 | orchestrator | Saturday 03 January 2026 00:59:02 +0000 (0:00:03.601) 0:00:08.538 ****** 2026-01-03 00:59:53.406244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-03 00:59:53.406256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 
00:59:53.406263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:59:53.406270 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:53.406278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-03 00:59:53.406288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:59:53.406295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:59:53.406306 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:53.406318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-03 00:59:53.406326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:59:53.406333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:59:53.406559 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:53.406572 | orchestrator | 2026-01-03 00:59:53.406579 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-03 00:59:53.406586 | orchestrator | Saturday 03 January 2026 00:59:03 +0000 (0:00:00.891) 0:00:09.429 ****** 2026-01-03 00:59:53.406597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-03 00:59:53.406605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:59:53.406634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:59:53.406642 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:53.406650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-03 00:59:53.406657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 
00:59:53.406664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:59:53.406671 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:53.406681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-03 00:59:53.406707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:59:53.406715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:59:53.406722 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:53.406729 | orchestrator | 2026-01-03 00:59:53.406736 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-03 00:59:53.406743 | orchestrator | Saturday 03 January 2026 00:59:04 +0000 (0:00:00.796) 0:00:10.226 ****** 2026-01-03 00:59:53.406750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.406761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.406790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.406799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406814 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406863 | orchestrator | 2026-01-03 00:59:53.406870 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-03 00:59:53.406877 | orchestrator | Saturday 03 January 2026 00:59:07 +0000 (0:00:02.854) 0:00:13.080 ****** 2026-01-03 00:59:53.406898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.406906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:59:53.406914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.406924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:59:53.406950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.406958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:59:53.406965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.406986 | orchestrator | 2026-01-03 00:59:53.406997 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-03 00:59:53.407004 
| orchestrator | Saturday 03 January 2026 00:59:12 +0000 (0:00:05.116) 0:00:18.197 ****** 2026-01-03 00:59:53.407011 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:59:53.407018 | orchestrator | changed: [testbed-node-1] 2026-01-03 00:59:53.407025 | orchestrator | changed: [testbed-node-2] 2026-01-03 00:59:53.407031 | orchestrator | 2026-01-03 00:59:53.407038 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-03 00:59:53.407047 | orchestrator | Saturday 03 January 2026 00:59:13 +0000 (0:00:01.519) 0:00:19.716 ****** 2026-01-03 00:59:53.407054 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:53.407061 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:53.407067 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:53.407073 | orchestrator | 2026-01-03 00:59:53.407079 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-03 00:59:53.407086 | orchestrator | Saturday 03 January 2026 00:59:14 +0000 (0:00:00.689) 0:00:20.406 ****** 2026-01-03 00:59:53.407093 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:53.407099 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:53.407105 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:53.407112 | orchestrator | 2026-01-03 00:59:53.407119 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-03 00:59:53.407126 | orchestrator | Saturday 03 January 2026 00:59:14 +0000 (0:00:00.306) 0:00:20.713 ****** 2026-01-03 00:59:53.407133 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:53.407140 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:53.407147 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:53.407154 | orchestrator | 2026-01-03 00:59:53.407161 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-03 00:59:53.407167 | 
orchestrator | Saturday 03 January 2026 00:59:15 +0000 (0:00:00.541) 0:00:21.254 ****** 2026-01-03 00:59:53.407195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-03 00:59:53.407205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:59:53.407214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:59:53.407231 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:53.407245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-03 00:59:53.407295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:59:53.407324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:59:53.407333 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:53.407341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-01-03 00:59:53.407348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-03 00:59:53.407361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-03 00:59:53.407371 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:53.407380 | orchestrator | 2026-01-03 00:59:53.407386 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-03 00:59:53.407393 | orchestrator | Saturday 03 January 2026 00:59:16 +0000 (0:00:00.610) 0:00:21.864 ****** 2026-01-03 00:59:53.407403 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:53.407412 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:53.407418 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:53.407425 | orchestrator | 2026-01-03 00:59:53.407452 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] 
****************************** 2026-01-03 00:59:53.407459 | orchestrator | Saturday 03 January 2026 00:59:16 +0000 (0:00:00.292) 0:00:22.157 ****** 2026-01-03 00:59:53.407469 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-03 00:59:53.407477 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-03 00:59:53.407483 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-03 00:59:53.407490 | orchestrator | 2026-01-03 00:59:53.407497 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-03 00:59:53.407504 | orchestrator | Saturday 03 January 2026 00:59:18 +0000 (0:00:01.693) 0:00:23.850 ****** 2026-01-03 00:59:53.407510 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 00:59:53.407517 | orchestrator | 2026-01-03 00:59:53.407524 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-03 00:59:53.407531 | orchestrator | Saturday 03 January 2026 00:59:18 +0000 (0:00:00.921) 0:00:24.771 ****** 2026-01-03 00:59:53.407537 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:53.407544 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:53.407551 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:53.407557 | orchestrator | 2026-01-03 00:59:53.407564 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-03 00:59:53.407571 | orchestrator | Saturday 03 January 2026 00:59:19 +0000 (0:00:00.884) 0:00:25.656 ****** 2026-01-03 00:59:53.407578 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-03 00:59:53.407584 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 00:59:53.407591 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-03 00:59:53.407598 | orchestrator | 2026-01-03 
00:59:53.407604 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-03 00:59:53.407615 | orchestrator | Saturday 03 January 2026 00:59:21 +0000 (0:00:01.231) 0:00:26.888 ****** 2026-01-03 00:59:53.407622 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:53.407629 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:53.407636 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:53.407643 | orchestrator | 2026-01-03 00:59:53.407650 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-03 00:59:53.407657 | orchestrator | Saturday 03 January 2026 00:59:21 +0000 (0:00:00.318) 0:00:27.207 ****** 2026-01-03 00:59:53.407664 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-03 00:59:53.407675 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-03 00:59:53.407682 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-03 00:59:53.407689 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-03 00:59:53.407696 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-03 00:59:53.407754 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-03 00:59:53.407762 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-03 00:59:53.407769 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-03 00:59:53.407776 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-03 00:59:53.407782 | orchestrator | changed: 
[testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-03 00:59:53.407790 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-03 00:59:53.407796 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-03 00:59:53.407803 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-03 00:59:53.407810 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-03 00:59:53.407817 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-03 00:59:53.407824 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-03 00:59:53.407831 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-03 00:59:53.407838 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-03 00:59:53.407844 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-03 00:59:53.407851 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-03 00:59:53.407858 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-03 00:59:53.407864 | orchestrator | 2026-01-03 00:59:53.407870 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-03 00:59:53.407877 | orchestrator | Saturday 03 January 2026 00:59:29 +0000 (0:00:07.936) 0:00:35.143 ****** 2026-01-03 00:59:53.407883 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-03 00:59:53.407889 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-03 00:59:53.407897 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-03 00:59:53.407907 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-03 00:59:53.407914 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-03 00:59:53.407921 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-03 00:59:53.407928 | orchestrator | 2026-01-03 00:59:53.407935 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-03 00:59:53.407942 | orchestrator | Saturday 03 January 2026 00:59:32 +0000 (0:00:02.780) 0:00:37.924 ****** 2026-01-03 00:59:53.407954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.407967 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.407975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-03 00:59:53.407982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:59:53.407992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:59:53.408006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-03 00:59:53.408016 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.408024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.408031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-03 00:59:53.408038 | orchestrator | 2026-01-03 00:59:53.408044 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2026-01-03 00:59:53.408051 | orchestrator | Saturday 03 January 2026 00:59:34 +0000 (0:00:02.223) 0:00:40.147 ****** 2026-01-03 00:59:53.408057 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:53.408064 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:53.408071 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:53.408077 | orchestrator | 2026-01-03 00:59:53.408084 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-03 00:59:53.408091 | orchestrator | Saturday 03 January 2026 00:59:34 +0000 (0:00:00.269) 0:00:40.417 ****** 2026-01-03 00:59:53.408097 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:59:53.408104 | orchestrator | 2026-01-03 00:59:53.408111 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-01-03 00:59:53.408117 | orchestrator | Saturday 03 January 2026 00:59:36 +0000 (0:00:01.930) 0:00:42.347 ****** 2026-01-03 00:59:53.408123 | orchestrator | changed: [testbed-node-0] 2026-01-03 00:59:53.408130 | orchestrator | 2026-01-03 00:59:53.408137 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-03 00:59:53.408143 | orchestrator | Saturday 03 January 2026 00:59:38 +0000 (0:00:02.070) 0:00:44.417 ****** 2026-01-03 00:59:53.408150 | orchestrator | ok: [testbed-node-1] 2026-01-03 00:59:53.408157 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:53.408168 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:53.408174 | orchestrator | 2026-01-03 00:59:53.408181 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-03 00:59:53.408188 | orchestrator | Saturday 03 January 2026 00:59:39 +0000 (0:00:01.118) 0:00:45.536 ****** 2026-01-03 00:59:53.408197 | orchestrator | ok: [testbed-node-0] 2026-01-03 00:59:53.408204 | orchestrator | ok: 
[testbed-node-1] 2026-01-03 00:59:53.408211 | orchestrator | ok: [testbed-node-2] 2026-01-03 00:59:53.408217 | orchestrator | 2026-01-03 00:59:53.408224 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-03 00:59:53.408231 | orchestrator | Saturday 03 January 2026 00:59:40 +0000 (0:00:00.305) 0:00:45.842 ****** 2026-01-03 00:59:53.408238 | orchestrator | skipping: [testbed-node-0] 2026-01-03 00:59:53.408244 | orchestrator | skipping: [testbed-node-1] 2026-01-03 00:59:53.408251 | orchestrator | skipping: [testbed-node-2] 2026-01-03 00:59:53.408258 | orchestrator | 2026-01-03 00:59:53.408264 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-03 00:59:53.408271 | orchestrator | Saturday 03 January 2026 00:59:40 +0000 (0:00:00.360) 0:00:46.202 ****** 2026-01-03 00:59:53.408386 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "Container exited with non-zero return code 1", "rc": 1, "stderr": "+ sudo -E kolla_set_configs\nINFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying service configuration files\nINFO:__main__:Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh\nINFO:__main__:Setting permission for /usr/bin/keystone-startup.sh\nINFO:__main__:Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf\nINFO:__main__:Setting permission for /etc/keystone/keystone.conf\nINFO:__main__:Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf\nINFO:__main__:Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla\nINFO:__main__:Setting permission for /etc/keystone/fernet-keys\n++ 
cat /run_command\n+ CMD=/usr/bin/keystone-startup.sh\n+ ARGS=\n+ sudo kolla_copy_cacerts\nrehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL\n+ sudo kolla_install_projects\n+ [[ ! -n '' ]]\n+ . kolla_extend_start\n++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone\n++ [[ ! -d /var/log/kolla/keystone ]]\n++ mkdir -p /var/log/kolla/keystone\n+++ stat -c %U:%G /var/log/kolla/keystone\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]\n++ chown keystone:kolla /var/log/kolla/keystone\n++ '[' '!' -f /var/log/kolla/keystone/keystone.log ']'\n++ touch /var/log/kolla/keystone/keystone.log\n+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]\n++ chown keystone:keystone /var/log/kolla/keystone/keystone.log\n+++ stat -c %a /var/log/kolla/keystone\n++ [[ 2755 != \\7\\5\\5 ]]\n++ chmod 755 /var/log/kolla/keystone\n++ EXTRA_KEYSTONE_MANAGE_ARGS=\n++ [[ -n '' ]]\n++ [[ -n '' ]]\n++ [[ -n 0 ]]\n++ sudo -H -u keystone keystone-manage db_sync\n2026-01-03 00:59:50.293 1079 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:342\n2026-01-03 00:59:50.296 1079 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n(Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-03 00:59:50.296 1079 ERROR keystone Traceback (most recent call last):\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-03 00:59:50.296 1079 ERROR keystone self._dbapi_connection = 
engine.raw_connection()\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection\n2026-01-03 00:59:50.296 1079 ERROR keystone return self.pool.connect()\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-03 00:59:50.296 1079 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-03 00:59:50.296 1079 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-03 00:59:50.296 1079 ERROR keystone rec = pool._do_get()\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-03 00:59:50.296 1079 ERROR keystone with util.safe_reraise():\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-03 00:59:50.296 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-03 00:59:50.296 1079 ERROR keystone return self._create_connection()\n2026-01-03 
00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-03 00:59:50.296 1079 ERROR keystone return _ConnectionRecord(self)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-03 00:59:50.296 1079 ERROR keystone self.__connect()\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-03 00:59:50.296 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-03 00:59:50.296 1079 ERROR keystone self(*args, **kw)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-03 00:59:50.296 1079 ERROR keystone fn(*args, **kw)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go\n2026-01-03 00:59:50.296 1079 ERROR keystone return once_fn(*arg, **kw)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect\n2026-01-03 00:59:50.296 1079 ERROR keystone dialect.initialize(c)\n2026-01-03 00:59:50.296 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize\n2026-01-03 00:59:50.296 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize\n2026-01-03 00:59:50.296 1079 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level\n2026-01-03 00:59:50.296 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level\n2026-01-03 00:59:50.296 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-03 00:59:50.296 1079 ERROR keystone result = self._query(query)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-03 00:59:50.296 1079 ERROR keystone conn.query(q)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-03 00:59:50.296 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-03 00:59:50.296 1079 ERROR keystone 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-03 00:59:50.296 1079 ERROR keystone result.read()\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-03 00:59:50.296 1079 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-03 00:59:50.296 1079 ERROR keystone packet.raise_for_error()\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-03 00:59:50.296 1079 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-03 00:59:50.296 1079 ERROR keystone raise errorclass(errno, errval)\n2026-01-03 00:59:50.296 1079 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-03 00:59:50.296 1079 ERROR keystone \n2026-01-03 00:59:50.296 1079 ERROR keystone The above exception was the direct cause of the following exception:\n2026-01-03 00:59:50.296 1079 ERROR keystone \n2026-01-03 00:59:50.296 1079 ERROR keystone Traceback (most recent call last):\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in \n2026-01-03 00:59:50.296 1079 ERROR keystone sys.exit(main())\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main\n2026-01-03 00:59:50.296 1079 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1733, in main\n2026-01-03 00:59:50.296 1079 ERROR keystone CONF.command.cmd_class.main()\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 493, in main\n2026-01-03 00:59:50.296 1079 ERROR keystone upgrades.offline_sync_database_to_version(CONF.command.version)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 328, in offline_sync_database_to_version\n2026-01-03 00:59:50.296 1079 ERROR keystone _db_sync(engine=engine)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 217, in _db_sync\n2026-01-03 00:59:50.296 1079 ERROR keystone with sql.session_for_write() as session:\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-03 00:59:50.296 1079 ERROR keystone return next(self.gen)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1042, in _transaction_scope\n2026-01-03 00:59:50.296 1079 ERROR keystone with current._produce_block(\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-03 00:59:50.296 1079 ERROR keystone return next(self.gen)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 641, in _session\n2026-01-03 00:59:50.296 1079 ERROR keystone self.session = self.factory._create_session(\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 404, in _create_session\n2026-01-03 00:59:50.296 1079 ERROR keystone self._start()\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 493, in _start\n2026-01-03 00:59:50.296 1079 ERROR keystone self._setup_for_connection(\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 519, in _setup_for_connection\n2026-01-03 00:59:50.296 1079 ERROR keystone engine = engines.create_engine(\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator\n2026-01-03 00:59:50.296 1079 ERROR keystone return wrapped(*args, **kwargs)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 218, in create_engine\n2026-01-03 00:59:50.296 1079 ERROR keystone test_conn = _test_connection(engine, max_retries, retry_interval)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 411, in _test_connection\n2026-01-03 00:59:50.296 1079 ERROR keystone return engine.connect()\n2026-01-03 00:59:50.296 1079 ERROR 
keystone ^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3278, in connect\n2026-01-03 00:59:50.296 1079 ERROR keystone return self._connection_cls(self)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__\n2026-01-03 00:59:50.296 1079 ERROR keystone Connection._handle_dbapi_exception_noconnection(\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2439, in _handle_dbapi_exception_noconnection\n2026-01-03 00:59:50.296 1079 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-03 00:59:50.296 1079 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection\n2026-01-03 00:59:50.296 1079 ERROR keystone return self.pool.connect()\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-03 00:59:50.296 1079 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-03 00:59:50.296 1079 ERROR keystone fairy = 
_ConnectionRecord.checkout(pool)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-03 00:59:50.296 1079 ERROR keystone rec = pool._do_get()\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-03 00:59:50.296 1079 ERROR keystone with util.safe_reraise():\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-03 00:59:50.296 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-03 00:59:50.296 1079 ERROR keystone return self._create_connection()\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-03 00:59:50.296 1079 ERROR keystone return _ConnectionRecord(self)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-03 00:59:50.296 1079 ERROR keystone self.__connect()\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-03 00:59:50.296 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-03 00:59:50.296 1079 ERROR keystone 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-03 00:59:50.296 1079 ERROR keystone self(*args, **kw)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-03 00:59:50.296 1079 ERROR keystone fn(*args, **kw)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go\n2026-01-03 00:59:50.296 1079 ERROR keystone return once_fn(*arg, **kw)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect\n2026-01-03 00:59:50.296 1079 ERROR keystone dialect.initialize(c)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize\n2026-01-03 00:59:50.296 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize\n2026-01-03 00:59:50.296 1079 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level\n2026-01-03 00:59:50.296 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR 
keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level\n2026-01-03 00:59:50.296 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-03 00:59:50.296 1079 ERROR keystone result = self._query(query)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-03 00:59:50.296 1079 ERROR keystone conn.query(q)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-03 00:59:50.296 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-03 00:59:50.296 1079 ERROR keystone result.read()\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-03 00:59:50.296 1079 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-03 00:59:50.296 1079 ERROR keystone packet.raise_for_error()\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-03 00:59:50.296 
1079 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-03 00:59:50.296 1079 ERROR keystone raise errorclass(errno, errval)\n2026-01-03 00:59:50.296 1079 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-03 00:59:50.296 1079 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-03 00:59:50.296 1079 ERROR keystone \n", "stderr_lines": ["+ sudo -E kolla_set_configs", "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", "INFO:__main__:Copying service configuration files", "INFO:__main__:Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh", "INFO:__main__:Setting permission for /usr/bin/keystone-startup.sh", "INFO:__main__:Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf", "INFO:__main__:Setting permission for /etc/keystone/keystone.conf", "INFO:__main__:Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf", "INFO:__main__:Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla", "INFO:__main__:Setting permission for /etc/keystone/fernet-keys", "++ cat /run_command", "+ CMD=/usr/bin/keystone-startup.sh", "+ ARGS=", "+ sudo kolla_copy_cacerts", "rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL", "+ sudo kolla_install_projects", "+ [[ ! -n '' ]]", "+ . kolla_extend_start", "++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone", "++ [[ ! 
-d /var/log/kolla/keystone ]]", "++ mkdir -p /var/log/kolla/keystone", "+++ stat -c %U:%G /var/log/kolla/keystone", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]", "++ chown keystone:kolla /var/log/kolla/keystone", "++ '[' '!' -f /var/log/kolla/keystone/keystone.log ']'", "++ touch /var/log/kolla/keystone/keystone.log", "+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]", "++ chown keystone:keystone /var/log/kolla/keystone/keystone.log", "+++ stat -c %a /var/log/kolla/keystone", "++ [[ 2755 != \\7\\5\\5 ]]", "++ chmod 755 /var/log/kolla/keystone", "++ EXTRA_KEYSTONE_MANAGE_ARGS=", "++ [[ -n '' ]]", "++ [[ -n '' ]]", "++ [[ -n 0 ]]", "++ sudo -H -u keystone keystone-manage db_sync", "2026-01-03 00:59:50.293 1079 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:342", "2026-01-03 00:59:50.296 1079 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "(Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-03 00:59:50.296 1079 ERROR keystone Traceback (most recent call last):", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-03 00:59:50.296 1079 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection", "2026-01-03 00:59:50.296 1079 
ERROR keystone return self.pool.connect()", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect", "2026-01-03 00:59:50.296 1079 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-03 00:59:50.296 1079 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-03 00:59:50.296 1079 ERROR keystone rec = pool._do_get()", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-03 00:59:50.296 1079 ERROR keystone with util.safe_reraise():", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-03 00:59:50.296 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-03 00:59:50.296 1079 ERROR keystone return self._create_connection()", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-03 00:59:50.296 1079 
ERROR keystone return _ConnectionRecord(self)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-03 00:59:50.296 1079 ERROR keystone self.__connect()", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-03 00:59:50.296 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-03 00:59:50.296 1079 ERROR keystone self(*args, **kw)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-03 00:59:50.296 1079 ERROR keystone fn(*args, **kw)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go", "2026-01-03 00:59:50.296 1079 ERROR keystone return once_fn(*arg, **kw)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect", "2026-01-03 00:59:50.296 1079 ERROR keystone dialect.initialize(c)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize", "2026-01-03 00:59:50.296 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-03 00:59:50.296 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize", "2026-01-03 00:59:50.296 1079 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level", "2026-01-03 00:59:50.296 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level", "2026-01-03 00:59:50.296 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-03 00:59:50.296 1079 ERROR keystone result = self._query(query)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-03 00:59:50.296 1079 ERROR keystone conn.query(q)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-03 00:59:50.296 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-03 00:59:50.296 1079 ERROR keystone result.read()", 
"2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-03 00:59:50.296 1079 ERROR keystone first_packet = self.connection._read_packet()", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-03 00:59:50.296 1079 ERROR keystone packet.raise_for_error()", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-03 00:59:50.296 1079 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-03 00:59:50.296 1079 ERROR keystone raise errorclass(errno, errval)", "2026-01-03 00:59:50.296 1079 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-03 00:59:50.296 1079 ERROR keystone ", "2026-01-03 00:59:50.296 1079 ERROR keystone The above exception was the direct cause of the following exception:", "2026-01-03 00:59:50.296 1079 ERROR keystone ", "2026-01-03 00:59:50.296 1079 ERROR keystone Traceback (most recent call last):", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in ", "2026-01-03 00:59:50.296 1079 ERROR keystone sys.exit(main())", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main", "2026-01-03 00:59:50.296 1079 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)", "2026-01-03 00:59:50.296 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1733, in main", "2026-01-03 00:59:50.296 1079 ERROR keystone CONF.command.cmd_class.main()", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 493, in main", "2026-01-03 00:59:50.296 1079 ERROR keystone upgrades.offline_sync_database_to_version(CONF.command.version)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 328, in offline_sync_database_to_version", "2026-01-03 00:59:50.296 1079 ERROR keystone _db_sync(engine=engine)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 217, in _db_sync", "2026-01-03 00:59:50.296 1079 ERROR keystone with sql.session_for_write() as session:", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-03 00:59:50.296 1079 ERROR keystone return next(self.gen)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1042, in _transaction_scope", "2026-01-03 00:59:50.296 1079 ERROR keystone with current._produce_block(", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-03 00:59:50.296 1079 ERROR keystone return next(self.gen)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 641, in _session", "2026-01-03 00:59:50.296 1079 ERROR keystone self.session = self.factory._create_session(", "2026-01-03 00:59:50.296 1079 ERROR keystone 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 404, in _create_session", "2026-01-03 00:59:50.296 1079 ERROR keystone self._start()", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 493, in _start", "2026-01-03 00:59:50.296 1079 ERROR keystone self._setup_for_connection(", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 519, in _setup_for_connection", "2026-01-03 00:59:50.296 1079 ERROR keystone engine = engines.create_engine(", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator", "2026-01-03 00:59:50.296 1079 ERROR keystone return wrapped(*args, **kwargs)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 218, in create_engine", "2026-01-03 00:59:50.296 1079 ERROR keystone test_conn = _test_connection(engine, max_retries, retry_interval)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 411, in _test_connection", "2026-01-03 00:59:50.296 1079 ERROR keystone return engine.connect()", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3278, in connect", "2026-01-03 00:59:50.296 1079 ERROR 
keystone return self._connection_cls(self)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__", "2026-01-03 00:59:50.296 1079 ERROR keystone Connection._handle_dbapi_exception_noconnection(", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2439, in _handle_dbapi_exception_noconnection", "2026-01-03 00:59:50.296 1079 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-03 00:59:50.296 1079 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection", "2026-01-03 00:59:50.296 1079 ERROR keystone return self.pool.connect()", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect", "2026-01-03 00:59:50.296 1079 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-03 00:59:50.296 1079 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-03 00:59:50.296 1079 ERROR keystone rec = pool._do_get()", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-03 00:59:50.296 1079 ERROR keystone with util.safe_reraise():", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-03 00:59:50.296 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-03 00:59:50.296 1079 ERROR keystone return self._create_connection()", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-03 00:59:50.296 1079 ERROR keystone return _ConnectionRecord(self)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-03 00:59:50.296 1079 ERROR keystone self.__connect()", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-03 00:59:50.296 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-03 00:59:50.296 1079 ERROR keystone self(*args, **kw)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-03 00:59:50.296 1079 ERROR keystone fn(*args, **kw)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go", "2026-01-03 00:59:50.296 1079 ERROR keystone return once_fn(*arg, **kw)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect", "2026-01-03 00:59:50.296 1079 ERROR keystone dialect.initialize(c)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize", "2026-01-03 00:59:50.296 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize", "2026-01-03 00:59:50.296 1079 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level", "2026-01-03 00:59:50.296 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level", "2026-01-03 00:59:50.296 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-03 00:59:50.296 1079 ERROR keystone result = self._query(query)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-03 00:59:50.296 1079 ERROR keystone conn.query(q)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-03 00:59:50.296 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-03 00:59:50.296 1079 ERROR keystone result.read()", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-03 00:59:50.296 1079 ERROR keystone first_packet = self.connection._read_packet()", "2026-01-03 00:59:50.296 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-03 00:59:50.296 1079 ERROR keystone packet.raise_for_error()", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", 
"2026-01-03 00:59:50.296 1079 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-03 00:59:50.296 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-03 00:59:50.296 1079 ERROR keystone raise errorclass(errno, errval)", "2026-01-03 00:59:50.296 1079 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-03 00:59:50.296 1079 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-03 00:59:50.296 1079 ERROR keystone "], "stdout": "Updating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d...\ndone.\n", "stdout_lines": ["Updating certificates in /etc/ssl/certs...", "1 added, 0 removed; done.", "Running hooks in /etc/ca-certificates/update.d...", "done."]} 2026-01-03 00:59:53.408510 | orchestrator | 2026-01-03 00:59:53.408524 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 00:59:53.408542 | orchestrator | testbed-node-0 : ok=21  changed=11  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-01-03 00:59:53.408558 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-03 00:59:53.408566 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-03 00:59:53.408574 | orchestrator | 2026-01-03 00:59:53.408582 | orchestrator | 2026-01-03 00:59:53.408590 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 00:59:53.408603 | orchestrator | Saturday 03 January 2026 00:59:51 +0000 (0:00:10.645) 0:00:56.848 ****** 2026-01-03 00:59:53.408618 | orchestrator | =============================================================================== 2026-01-03 00:59:53.408634 
| orchestrator | keystone : Running Keystone bootstrap container ------------------------ 10.65s 2026-01-03 00:59:53.408651 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 7.94s 2026-01-03 00:59:53.408667 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.12s 2026-01-03 00:59:53.408683 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.60s 2026-01-03 00:59:53.408698 | orchestrator | keystone : Copying over config.json files for services ------------------ 2.85s 2026-01-03 00:59:53.408713 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.78s 2026-01-03 00:59:53.408729 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.22s 2026-01-03 00:59:53.408742 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.07s 2026-01-03 00:59:53.408755 | orchestrator | keystone : Creating keystone database ----------------------------------- 1.93s 2026-01-03 00:59:53.408765 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.87s 2026-01-03 00:59:53.408773 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.69s 2026-01-03 00:59:53.408780 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.52s 2026-01-03 00:59:53.408788 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.23s 2026-01-03 00:59:53.408798 | orchestrator | keystone : Checking for any running keystone_fernet containers ---------- 1.12s 2026-01-03 00:59:53.408805 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 0.92s 2026-01-03 00:59:53.408811 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS certificate --- 0.89s 2026-01-03 00:59:53.408818 | 
orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 0.88s 2026-01-03 00:59:53.408825 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 0.80s 2026-01-03 00:59:53.408832 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.75s 2026-01-03 00:59:53.408839 | orchestrator | keystone : Create Keystone domain-specific config directory ------------- 0.69s 2026-01-03 00:59:53.408846 | orchestrator | 2026-01-03 00:59:53 | INFO  | Task 2a6908b0-61c5-467d-b829-46ae73da501f is in state STARTED 2026-01-03 00:59:53.408853 | orchestrator | 2026-01-03 00:59:53 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED 2026-01-03 00:59:53.408860 | orchestrator | 2026-01-03 00:59:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:56.447479 | orchestrator | 2026-01-03 00:59:56 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED 2026-01-03 00:59:56.448272 | orchestrator | 2026-01-03 00:59:56 | INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED 2026-01-03 00:59:56.451733 | orchestrator | 2026-01-03 00:59:56 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 00:59:56.452574 | orchestrator | 2026-01-03 00:59:56 | INFO  | Task 2a6908b0-61c5-467d-b829-46ae73da501f is in state STARTED 2026-01-03 00:59:56.454791 | orchestrator | 2026-01-03 00:59:56 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED 2026-01-03 00:59:56.454858 | orchestrator | 2026-01-03 00:59:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 00:59:59.495942 | orchestrator | 2026-01-03 00:59:59 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED 2026-01-03 00:59:59.496655 | orchestrator | 2026-01-03 00:59:59 | INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED 2026-01-03 00:59:59.497654 | orchestrator | 2026-01-03 00:59:59 | INFO  | Task 
8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 00:59:59.500660 | orchestrator | 2026-01-03 00:59:59 | INFO  | Task 2a6908b0-61c5-467d-b829-46ae73da501f is in state STARTED 2026-01-03 00:59:59.502126 | orchestrator | 2026-01-03 00:59:59 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED 2026-01-03 00:59:59.502466 | orchestrator | 2026-01-03 00:59:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:02.564039 | orchestrator | 2026-01-03 01:00:02 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED 2026-01-03 01:00:02.566073 | orchestrator | 2026-01-03 01:00:02 | INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED 2026-01-03 01:00:02.567658 | orchestrator | 2026-01-03 01:00:02 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 01:00:02.569180 | orchestrator | 2026-01-03 01:00:02 | INFO  | Task 2a6908b0-61c5-467d-b829-46ae73da501f is in state STARTED 2026-01-03 01:00:02.571641 | orchestrator | 2026-01-03 01:00:02 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED 2026-01-03 01:00:02.571697 | orchestrator | 2026-01-03 01:00:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:05.634553 | orchestrator | 2026-01-03 01:00:05 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED 2026-01-03 01:00:05.637057 | orchestrator | 2026-01-03 01:00:05 | INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED 2026-01-03 01:00:05.639267 | orchestrator | 2026-01-03 01:00:05 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 01:00:05.641340 | orchestrator | 2026-01-03 01:00:05 | INFO  | Task 2a6908b0-61c5-467d-b829-46ae73da501f is in state STARTED 2026-01-03 01:00:05.643163 | orchestrator | 2026-01-03 01:00:05 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED 2026-01-03 01:00:05.643209 | orchestrator | 2026-01-03 01:00:05 | INFO  | Wait 1 
second(s) until the next check 2026-01-03 01:00:08.701196 | orchestrator | 2026-01-03 01:00:08 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED 2026-01-03 01:00:08.703146 | orchestrator | 2026-01-03 01:00:08 | INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED 2026-01-03 01:00:08.705433 | orchestrator | 2026-01-03 01:00:08 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 01:00:08.707391 | orchestrator | 2026-01-03 01:00:08 | INFO  | Task 2a6908b0-61c5-467d-b829-46ae73da501f is in state STARTED 2026-01-03 01:00:08.709437 | orchestrator | 2026-01-03 01:00:08 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED 2026-01-03 01:00:08.709490 | orchestrator | 2026-01-03 01:00:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:11.755188 | orchestrator | 2026-01-03 01:00:11 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED 2026-01-03 01:00:11.757609 | orchestrator | 2026-01-03 01:00:11 | INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED 2026-01-03 01:00:11.759410 | orchestrator | 2026-01-03 01:00:11 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 01:00:11.760674 | orchestrator | 2026-01-03 01:00:11 | INFO  | Task 2a6908b0-61c5-467d-b829-46ae73da501f is in state STARTED 2026-01-03 01:00:11.761524 | orchestrator | 2026-01-03 01:00:11 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED 2026-01-03 01:00:11.761538 | orchestrator | 2026-01-03 01:00:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:14.806578 | orchestrator | 2026-01-03 01:00:14 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED 2026-01-03 01:00:14.807901 | orchestrator | 2026-01-03 01:00:14 | INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED 2026-01-03 01:00:14.809421 | orchestrator | 2026-01-03 01:00:14 | INFO  | Task 
8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 01:00:14.810572 | orchestrator | 2026-01-03 01:00:14 | INFO  | Task 2a6908b0-61c5-467d-b829-46ae73da501f is in state STARTED 2026-01-03 01:00:14.812018 | orchestrator | 2026-01-03 01:00:14 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED 2026-01-03 01:00:14.812062 | orchestrator | 2026-01-03 01:00:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:17.866586 | orchestrator | 2026-01-03 01:00:17 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED 2026-01-03 01:00:17.868617 | orchestrator | 2026-01-03 01:00:17 | INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED 2026-01-03 01:00:17.877110 | orchestrator | 2026-01-03 01:00:17 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state STARTED 2026-01-03 01:00:17.879131 | orchestrator | 2026-01-03 01:00:17 | INFO  | Task 2a6908b0-61c5-467d-b829-46ae73da501f is in state STARTED 2026-01-03 01:00:17.880716 | orchestrator | 2026-01-03 01:00:17 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED 2026-01-03 01:00:17.880759 | orchestrator | 2026-01-03 01:00:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:20.929333 | orchestrator | 2026-01-03 01:00:20 | INFO  | Task fc0e7eca-0855-4742-8fdf-4d7543aaf07c is in state STARTED 2026-01-03 01:00:20.932108 | orchestrator | 2026-01-03 01:00:20 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED 2026-01-03 01:00:20.935077 | orchestrator | 2026-01-03 01:00:20 | INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED 2026-01-03 01:00:20.937135 | orchestrator | 2026-01-03 01:00:20 | INFO  | Task 8f04ac9a-1a0a-4e0e-9e0e-53560cb762ac is in state SUCCESS 2026-01-03 01:00:20.939240 | orchestrator | 2026-01-03 01:00:20.939285 | orchestrator | 2026-01-03 01:00:20.939291 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
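Editor's note on the keystone bootstrap failure above: `keystone-manage db_sync` died because SQLAlchemy's MySQL dialect probed the server with `SELECT @@transaction_isolation`, and the database answered with error 1193 ("Unknown system variable"). That variable name only exists on newer servers; older MySQL (pre-5.7.20) and MariaDB before the rename only know `tx_isolation`, so the dialect's version detection and the actual server disagreed here. A minimal sketch of that version gate (the thresholds are assumptions taken from MySQL/MariaDB release notes, not from this job's output; `isolation_query` is a hypothetical helper, not SQLAlchemy API):

```python
# Hedged sketch: which isolation-level query a MySQL-compatible server accepts.
# Assumption: MySQL introduced @@transaction_isolation in 5.7.20; MariaDB only
# added it much later (around 11.1), keeping @@tx_isolation before that.
# A server asked for the name it does not know replies with error 1193,
# exactly as seen in the keystone traceback above.
def isolation_query(version: tuple, is_mariadb: bool) -> str:
    """Return the isolation-level SELECT the given server version understands."""
    has_new_name = version >= ((11, 1, 1) if is_mariadb else (5, 7, 20))
    name = "transaction_isolation" if has_new_name else "tx_isolation"
    return f"SELECT @@{name}"

# A MariaDB 10.x server (typical for a kolla deployment) needs the old name:
print(isolation_query((10, 11, 6), is_mariadb=True))   # SELECT @@tx_isolation
print(isolation_query((8, 0, 36), is_mariadb=False))   # SELECT @@transaction_isolation
```

If the dialect believes it is talking to a new-enough server (for example because a proxy masks the real version string), it issues the new name against an old server and the connection attempt fails during `dialect.initialize()`, which is the failure mode this traceback shows.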
2026-01-03 01:00:20.939296 | orchestrator | 2026-01-03 01:00:20.939300 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 01:00:20.939304 | orchestrator | Saturday 03 January 2026 00:58:54 +0000 (0:00:00.253) 0:00:00.253 ****** 2026-01-03 01:00:20.939308 | orchestrator | ok: [testbed-node-0] 2026-01-03 01:00:20.939313 | orchestrator | ok: [testbed-node-1] 2026-01-03 01:00:20.939316 | orchestrator | ok: [testbed-node-2] 2026-01-03 01:00:20.939320 | orchestrator | 2026-01-03 01:00:20.939324 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 01:00:20.939328 | orchestrator | Saturday 03 January 2026 00:58:54 +0000 (0:00:00.300) 0:00:00.553 ****** 2026-01-03 01:00:20.939332 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-03 01:00:20.939337 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-03 01:00:20.939340 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-03 01:00:20.939344 | orchestrator | 2026-01-03 01:00:20.939348 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-03 01:00:20.939352 | orchestrator | 2026-01-03 01:00:20.939356 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-03 01:00:20.939359 | orchestrator | Saturday 03 January 2026 00:58:55 +0000 (0:00:00.352) 0:00:00.906 ****** 2026-01-03 01:00:20.939363 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 01:00:20.939368 | orchestrator | 2026-01-03 01:00:20.939372 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-03 01:00:20.939376 | orchestrator | Saturday 03 January 2026 00:58:55 +0000 (0:00:00.448) 0:00:01.355 ****** 2026-01-03 01:00:20.939391 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 01:00:20.939569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 01:00:20.939589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-03 01:00:20.939600 | orchestrator |
2026-01-03 01:00:20.939604 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-01-03 01:00:20.939608 | orchestrator | Saturday 03 January 2026 00:58:56 +0000 (0:00:01.111) 0:00:02.466 ******
2026-01-03 01:00:20.939612 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:00:20.939616 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:00:20.939620 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:00:20.939624 | orchestrator |
2026-01-03 01:00:20.939628 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-03 01:00:20.939632 | orchestrator | Saturday 03 January 2026 00:58:56 +0000 (0:00:00.363) 0:00:02.829 ******
2026-01-03 01:00:20.939636 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-01-03 01:00:20.939644 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-01-03 01:00:20.939648 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-01-03 01:00:20.939652 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-01-03 01:00:20.939656 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-01-03 01:00:20.939660 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-01-03 01:00:20.939664 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-01-03 01:00:20.939667 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-01-03 01:00:20.939671 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-01-03 01:00:20.939675 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-01-03 01:00:20.939679 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-01-03 01:00:20.939683 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-01-03 01:00:20.939687 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-01-03 01:00:20.939690 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-01-03 01:00:20.939694 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-01-03 01:00:20.939698 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-01-03 01:00:20.939702 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-01-03 01:00:20.939706 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-01-03 01:00:20.939710 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-01-03 01:00:20.939714 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-01-03 01:00:20.939717 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-01-03 01:00:20.939721 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-01-03 01:00:20.939725 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-01-03 01:00:20.939733 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-01-03 01:00:20.939737 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-01-03 01:00:20.939742 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-01-03 01:00:20.939746 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-01-03 01:00:20.939750 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-01-03 01:00:20.939756 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-01-03 01:00:20.939760 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-01-03 01:00:20.939764 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-01-03 01:00:20.939768 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-01-03 01:00:20.939771 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-01-03 01:00:20.939776 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-01-03 01:00:20.939780 | orchestrator |
2026-01-03 01:00:20.939784 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-03 01:00:20.939787 | orchestrator | Saturday 03 January 2026 00:58:57 +0000 (0:00:00.660) 0:00:03.489 ******
2026-01-03 01:00:20.939791 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:00:20.939795 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:00:20.939799 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:00:20.939803 | orchestrator |
2026-01-03 01:00:20.939807 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-03 01:00:20.939811 | orchestrator | Saturday 03 January 2026 00:58:57 +0000 (0:00:00.254) 0:00:03.744 ******
2026-01-03 01:00:20.939815 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.939819 | orchestrator |
2026-01-03 01:00:20.939825 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-03 01:00:20.939829 | orchestrator | Saturday 03 January 2026 00:58:57 +0000 (0:00:00.103) 0:00:03.847 ******
2026-01-03 01:00:20.939833 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.939837 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:00:20.939841 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:00:20.939844 | orchestrator |
2026-01-03 01:00:20.939848 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-03 01:00:20.939852 | orchestrator | Saturday 03 January 2026 00:58:58 +0000 (0:00:00.366) 0:00:04.214 ******
2026-01-03 01:00:20.939856 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:00:20.939860 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:00:20.939864 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:00:20.939868 | orchestrator |
2026-01-03 01:00:20.939871 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-03 01:00:20.939875 | orchestrator | Saturday 03 January 2026 00:58:58 +0000 (0:00:00.267) 0:00:04.482 ******
2026-01-03 01:00:20.939879 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.939883 | orchestrator |
2026-01-03 01:00:20.939889 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-03 01:00:20.939893 | orchestrator | Saturday 03 January 2026 00:58:58 +0000 (0:00:00.132) 0:00:04.614 ******
2026-01-03 01:00:20.939897 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.939901 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:00:20.939905 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:00:20.939908 | orchestrator |
2026-01-03 01:00:20.939912 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-03 01:00:20.939916 | orchestrator | Saturday 03 January 2026 00:58:59 +0000 (0:00:00.263) 0:00:04.878 ******
2026-01-03 01:00:20.939920 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:00:20.939924 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:00:20.939928 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:00:20.939932 | orchestrator |
2026-01-03 01:00:20.939935 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-03 01:00:20.939939 | orchestrator | Saturday 03 January 2026 00:58:59 +0000 (0:00:00.313) 0:00:05.192 ******
2026-01-03 01:00:20.939943 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.939947 | orchestrator |
2026-01-03 01:00:20.939951 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-03 01:00:20.939955 | orchestrator | Saturday 03 January 2026 00:58:59 +0000 (0:00:00.125) 0:00:05.317 ******
2026-01-03 01:00:20.939962 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.939968 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:00:20.939974 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:00:20.939980 | orchestrator |
2026-01-03 01:00:20.939985 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-03 01:00:20.939991 | orchestrator | Saturday 03 January 2026 00:58:59 +0000 (0:00:00.530) 0:00:05.847 ******
2026-01-03 01:00:20.939997 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:00:20.940003 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:00:20.940008 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:00:20.940015 | orchestrator |
2026-01-03 01:00:20.940021 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-03 01:00:20.940027 | orchestrator | Saturday 03 January 2026 00:59:00 +0000 (0:00:00.318) 0:00:06.166 ******
2026-01-03 01:00:20.940033 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940039 | orchestrator |
2026-01-03 01:00:20.940045 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-03 01:00:20.940051 | orchestrator | Saturday 03 January 2026 00:59:00 +0000 (0:00:00.143) 0:00:06.310 ******
2026-01-03 01:00:20.940058 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940064 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:00:20.940070 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:00:20.940075 | orchestrator |
2026-01-03 01:00:20.940081 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-03 01:00:20.940087 | orchestrator | Saturday 03 January 2026 00:59:00 +0000 (0:00:00.275) 0:00:06.585 ******
2026-01-03 01:00:20.940096 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:00:20.940102 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:00:20.940108 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:00:20.940115 | orchestrator |
2026-01-03 01:00:20.940121 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-03 01:00:20.940127 | orchestrator | Saturday 03 January 2026 00:59:01 +0000 (0:00:00.493) 0:00:07.078 ******
2026-01-03 01:00:20.940133 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940139 | orchestrator |
2026-01-03 01:00:20.940145 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-03 01:00:20.940152 | orchestrator | Saturday 03 January 2026 00:59:01 +0000 (0:00:00.137) 0:00:07.216 ******
2026-01-03 01:00:20.940157 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940163 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:00:20.940169 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:00:20.940175 | orchestrator |
2026-01-03 01:00:20.940181 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-03 01:00:20.940192 | orchestrator | Saturday 03 January 2026 00:59:01 +0000 (0:00:00.299) 0:00:07.516 ******
2026-01-03 01:00:20.940198 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:00:20.940205 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:00:20.940211 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:00:20.940217 | orchestrator |
2026-01-03 01:00:20.940224 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-03 01:00:20.940230 | orchestrator | Saturday 03 January 2026 00:59:01 +0000 (0:00:00.311) 0:00:07.827 ******
2026-01-03 01:00:20.940236 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940243 | orchestrator |
2026-01-03 01:00:20.940249 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-03 01:00:20.940255 | orchestrator | Saturday 03 January 2026 00:59:02 +0000 (0:00:00.128) 0:00:07.956 ******
2026-01-03 01:00:20.940262 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940268 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:00:20.940274 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:00:20.940280 | orchestrator |
2026-01-03 01:00:20.940286 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-03 01:00:20.940296 | orchestrator | Saturday 03 January 2026 00:59:02 +0000 (0:00:00.279) 0:00:08.236 ******
2026-01-03 01:00:20.940303 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:00:20.940308 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:00:20.940315 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:00:20.940320 | orchestrator |
2026-01-03 01:00:20.940326 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-03 01:00:20.940333 | orchestrator | Saturday 03 January 2026 00:59:02 +0000 (0:00:00.532) 0:00:08.768 ******
2026-01-03 01:00:20.940339 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940345 | orchestrator |
2026-01-03 01:00:20.940352 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-03 01:00:20.940358 | orchestrator | Saturday 03 January 2026 00:59:03 +0000 (0:00:00.132) 0:00:08.900 ******
2026-01-03 01:00:20.940364 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940370 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:00:20.940376 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:00:20.940383 | orchestrator |
2026-01-03 01:00:20.940389 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-03 01:00:20.940405 | orchestrator | Saturday 03 January 2026 00:59:03 +0000 (0:00:00.331) 0:00:09.232 ******
2026-01-03 01:00:20.940414 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:00:20.940419 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:00:20.940425 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:00:20.940431 | orchestrator |
2026-01-03 01:00:20.940437 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-03 01:00:20.940443 | orchestrator | Saturday 03 January 2026 00:59:03 +0000 (0:00:00.304) 0:00:09.536 ******
2026-01-03 01:00:20.940449 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940455 | orchestrator |
2026-01-03 01:00:20.940461 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-03 01:00:20.940467 | orchestrator | Saturday 03 January 2026 00:59:03 +0000 (0:00:00.120) 0:00:09.657 ******
2026-01-03 01:00:20.940473 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940479 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:00:20.940485 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:00:20.940491 | orchestrator |
2026-01-03 01:00:20.940510 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-03 01:00:20.940517 | orchestrator | Saturday 03 January 2026 00:59:04 +0000 (0:00:00.315) 0:00:09.972 ******
2026-01-03 01:00:20.940523 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:00:20.940530 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:00:20.940536 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:00:20.940542 | orchestrator |
2026-01-03 01:00:20.940633 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-03 01:00:20.940649 | orchestrator | Saturday 03 January 2026 00:59:04 +0000 (0:00:00.531) 0:00:10.504 ******
2026-01-03 01:00:20.940655 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940662 | orchestrator |
2026-01-03 01:00:20.940668 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-03 01:00:20.940674 | orchestrator | Saturday 03 January 2026 00:59:04 +0000 (0:00:00.130) 0:00:10.635 ******
2026-01-03 01:00:20.940680 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940686 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:00:20.940692 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:00:20.940699 | orchestrator |
2026-01-03 01:00:20.940705 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-03 01:00:20.940712 | orchestrator | Saturday 03 January 2026 00:59:05 +0000 (0:00:00.296) 0:00:10.931 ******
2026-01-03 01:00:20.940718 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:00:20.940725 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:00:20.940731 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:00:20.940737 | orchestrator |
2026-01-03 01:00:20.940743 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-03 01:00:20.940747 | orchestrator | Saturday 03 January 2026 00:59:05 +0000 (0:00:00.313) 0:00:11.245 ******
2026-01-03 01:00:20.940751 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940755 | orchestrator |
2026-01-03 01:00:20.940759 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-03 01:00:20.940767 | orchestrator | Saturday 03 January 2026 00:59:05 +0000 (0:00:00.130) 0:00:11.376 ******
2026-01-03 01:00:20.940771 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940775 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:00:20.940778 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:00:20.940782 | orchestrator |
2026-01-03 01:00:20.940786 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-01-03 01:00:20.940790 | orchestrator | Saturday 03 January 2026 00:59:06 +0000 (0:00:00.505) 0:00:11.881 ******
2026-01-03 01:00:20.940794 | orchestrator | changed: [testbed-node-0]
2026-01-03 01:00:20.940798 | orchestrator | changed: [testbed-node-1]
2026-01-03 01:00:20.940801 | orchestrator | changed: [testbed-node-2]
2026-01-03 01:00:20.940805 | orchestrator |
2026-01-03 01:00:20.940809 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-01-03 01:00:20.940813 | orchestrator | Saturday 03 January 2026 00:59:07 +0000 (0:00:01.561) 0:00:13.443 ******
2026-01-03 01:00:20.940817 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-03 01:00:20.940821 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-03 01:00:20.940825 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-03 01:00:20.940828 | orchestrator |
2026-01-03 01:00:20.940832 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-01-03 01:00:20.940836 | orchestrator | Saturday 03 January 2026 00:59:09 +0000 (0:00:01.739) 0:00:15.182 ******
2026-01-03 01:00:20.940840 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-03 01:00:20.940844 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-03 01:00:20.940848 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-03 01:00:20.940852 | orchestrator |
2026-01-03 01:00:20.940856 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-01-03 01:00:20.940866 | orchestrator | Saturday 03 January 2026 00:59:11 +0000 (0:00:02.129) 0:00:17.311 ******
2026-01-03 01:00:20.940870 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-03 01:00:20.940873 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-03 01:00:20.940877 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-03 01:00:20.940885 | orchestrator |
2026-01-03 01:00:20.940889 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-01-03 01:00:20.940893 | orchestrator | Saturday 03 January 2026 00:59:13 +0000 (0:00:02.043) 0:00:19.355 ******
2026-01-03 01:00:20.940900 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940906 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:00:20.940911 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:00:20.940917 | orchestrator |
2026-01-03 01:00:20.940923 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-01-03 01:00:20.940930 | orchestrator | Saturday 03 January 2026 00:59:13 +0000 (0:00:00.354) 0:00:19.686 ******
2026-01-03 01:00:20.940936 | orchestrator | skipping: [testbed-node-0]
2026-01-03 01:00:20.940942 | orchestrator | skipping: [testbed-node-1]
2026-01-03 01:00:20.940948 | orchestrator | skipping: [testbed-node-2]
2026-01-03 01:00:20.940956 | orchestrator |
2026-01-03 01:00:20.940964 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-03 01:00:20.940971 | orchestrator | Saturday 03 January 2026 00:59:14 +0000 (0:00:00.354) 0:00:20.040 ******
2026-01-03 01:00:20.940977 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 01:00:20.940983 | orchestrator |
2026-01-03 01:00:20.940989 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-01-03 01:00:20.940995 | orchestrator | Saturday 03 January 2026 00:59:15 +0000 (0:00:00.930) 0:00:20.970 ******
2026-01-03 01:00:20.941008 | orchestrator | changed: [testbed-node-0]
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 01:00:20.941024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 01:00:20.941046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 01:00:20.941054 | orchestrator | 2026-01-03 01:00:20.941058 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-03 01:00:20.941062 | orchestrator | Saturday 03 January 2026 00:59:16 +0000 (0:00:01.683) 0:00:22.653 ****** 2026-01-03 01:00:20.941070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 01:00:20.941074 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:00:20.941083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 01:00:20.941091 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:00:20.941095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 01:00:20.941099 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:00:20.941103 | orchestrator | 2026-01-03 01:00:20.941107 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-03 01:00:20.941111 | orchestrator | Saturday 03 January 2026 00:59:17 +0000 (0:00:00.654) 0:00:23.308 ****** 2026-01-03 01:00:20.941120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 01:00:20.941127 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:00:20.941133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 01:00:20.941138 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:00:20.941145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-03 01:00:20.941153 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:00:20.941157 | orchestrator | 2026-01-03 01:00:20.941161 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-03 01:00:20.941165 | orchestrator | Saturday 03 January 2026 00:59:18 +0000 (0:00:00.803) 0:00:24.112 ****** 2026-01-03 01:00:20.941171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 01:00:20.941182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 01:00:20.941189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-03 01:00:20.941196 | orchestrator | 2026-01-03 01:00:20.941200 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-03 01:00:20.941203 | orchestrator | Saturday 03 January 2026 00:59:19 +0000 (0:00:01.568) 0:00:25.680 ****** 2026-01-03 01:00:20.941207 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:00:20.941211 | orchestrator | skipping: 
[testbed-node-1] 2026-01-03 01:00:20.941215 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:00:20.941219 | orchestrator | 2026-01-03 01:00:20.941223 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-03 01:00:20.941227 | orchestrator | Saturday 03 January 2026 00:59:20 +0000 (0:00:00.297) 0:00:25.978 ****** 2026-01-03 01:00:20.941231 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 01:00:20.941234 | orchestrator | 2026-01-03 01:00:20.941238 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-01-03 01:00:20.941244 | orchestrator | Saturday 03 January 2026 00:59:20 +0000 (0:00:00.700) 0:00:26.678 ****** 2026-01-03 01:00:20.941248 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:00:20.941252 | orchestrator | 2026-01-03 01:00:20.941256 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-01-03 01:00:20.941260 | orchestrator | Saturday 03 January 2026 00:59:23 +0000 (0:00:02.653) 0:00:29.332 ****** 2026-01-03 01:00:20.941265 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:00:20.941270 | orchestrator | 2026-01-03 01:00:20.941274 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-01-03 01:00:20.941279 | orchestrator | Saturday 03 January 2026 00:59:26 +0000 (0:00:02.616) 0:00:31.949 ****** 2026-01-03 01:00:20.941284 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:00:20.941288 | orchestrator | 2026-01-03 01:00:20.941293 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-03 01:00:20.941299 | orchestrator | Saturday 03 January 2026 00:59:39 +0000 (0:00:13.894) 0:00:45.843 ****** 2026-01-03 01:00:20.941306 | orchestrator | 2026-01-03 01:00:20.941313 | orchestrator | TASK [horizon : Flush handlers] 
************************************************ 2026-01-03 01:00:20.941319 | orchestrator | Saturday 03 January 2026 00:59:40 +0000 (0:00:00.068) 0:00:45.912 ****** 2026-01-03 01:00:20.941326 | orchestrator | 2026-01-03 01:00:20.941332 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-03 01:00:20.941339 | orchestrator | Saturday 03 January 2026 00:59:40 +0000 (0:00:00.068) 0:00:45.980 ****** 2026-01-03 01:00:20.941346 | orchestrator | 2026-01-03 01:00:20.941353 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-01-03 01:00:20.941359 | orchestrator | Saturday 03 January 2026 00:59:40 +0000 (0:00:00.090) 0:00:46.070 ****** 2026-01-03 01:00:20.941365 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:00:20.941370 | orchestrator | changed: [testbed-node-2] 2026-01-03 01:00:20.941375 | orchestrator | changed: [testbed-node-1] 2026-01-03 01:00:20.941379 | orchestrator | 2026-01-03 01:00:20.941384 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 01:00:20.941389 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-03 01:00:20.941393 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-03 01:00:20.941397 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-03 01:00:20.941401 | orchestrator | 2026-01-03 01:00:20.941408 | orchestrator | 2026-01-03 01:00:20.941412 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 01:00:20.941416 | orchestrator | Saturday 03 January 2026 01:00:20 +0000 (0:00:40.009) 0:01:26.079 ****** 2026-01-03 01:00:20.941419 | orchestrator | =============================================================================== 2026-01-03 
01:00:20.941423 | orchestrator | horizon : Restart horizon container ------------------------------------ 40.01s 2026-01-03 01:00:20.941427 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 13.89s 2026-01-03 01:00:20.941431 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.65s 2026-01-03 01:00:20.941435 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.62s 2026-01-03 01:00:20.941438 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.13s 2026-01-03 01:00:20.941442 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.04s 2026-01-03 01:00:20.941446 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.74s 2026-01-03 01:00:20.941450 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.68s 2026-01-03 01:00:20.941455 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.57s 2026-01-03 01:00:20.941461 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.56s 2026-01-03 01:00:20.941466 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.11s 2026-01-03 01:00:20.941472 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.93s 2026-01-03 01:00:20.941478 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.80s 2026-01-03 01:00:20.941483 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2026-01-03 01:00:20.941490 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s 2026-01-03 01:00:20.941510 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s 2026-01-03 
01:00:20.941517 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-01-03 01:00:20.941524 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-01-03 01:00:20.941529 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s 2026-01-03 01:00:20.941535 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2026-01-03 01:00:20.941540 | orchestrator | 2026-01-03 01:00:20 | INFO  | Task 2a6908b0-61c5-467d-b829-46ae73da501f is in state SUCCESS 2026-01-03 01:00:20.941547 | orchestrator | 2026-01-03 01:00:20 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED 2026-01-03 01:00:20.941554 | orchestrator | 2026-01-03 01:00:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:23.996787 | orchestrator | 2026-01-03 01:00:23 | INFO  | Task fc0e7eca-0855-4742-8fdf-4d7543aaf07c is in state STARTED 2026-01-03 01:00:23.997692 | orchestrator | 2026-01-03 01:00:23 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED 2026-01-03 01:00:23.998723 | orchestrator | 2026-01-03 01:00:23 | INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED 2026-01-03 01:00:24.003129 | orchestrator | 2026-01-03 01:00:24 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED 2026-01-03 01:00:24.008989 | orchestrator | 2026-01-03 01:00:24 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED 2026-01-03 01:00:24.009060 | orchestrator | 2026-01-03 01:00:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:00:27.046784 | orchestrator | 2026-01-03 01:00:27 | INFO  | Task fc0e7eca-0855-4742-8fdf-4d7543aaf07c is in state STARTED 2026-01-03 01:00:27.048944 | orchestrator | 2026-01-03 01:00:27 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED 2026-01-03 01:00:27.051114 | orchestrator | 2026-01-03 01:00:27 | 
INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED
2026-01-03 01:00:27.052829 | orchestrator | 2026-01-03 01:00:27 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED
2026-01-03 01:00:27.054974 | orchestrator | 2026-01-03 01:00:27 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED
2026-01-03 01:00:27.055040 | orchestrator | 2026-01-03 01:00:27 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:00:57.554438 | orchestrator | 2026-01-03 01:00:57 | INFO  | Task fc0e7eca-0855-4742-8fdf-4d7543aaf07c is in state STARTED
2026-01-03 01:00:57.556275 | orchestrator | 2026-01-03 01:00:57 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED
2026-01-03 01:00:57.558546 | orchestrator | 2026-01-03 01:00:57 | INFO  | Task
d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED
2026-01-03 01:00:57.560289 | orchestrator | 2026-01-03 01:00:57 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED
2026-01-03 01:00:57.562795 | orchestrator | 2026-01-03 01:00:57 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED
2026-01-03 01:00:57.562834 | orchestrator | 2026-01-03 01:00:57 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:01:00.608190 | orchestrator | 2026-01-03 01:01:00 | INFO  | Task fc0e7eca-0855-4742-8fdf-4d7543aaf07c is in state STARTED
2026-01-03 01:01:00.608967 | orchestrator | 2026-01-03 01:01:00 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state STARTED
2026-01-03 01:01:00.610895 | orchestrator | 2026-01-03 01:01:00 | INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state STARTED
2026-01-03 01:01:00.611869 | orchestrator | 2026-01-03 01:01:00 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED
2026-01-03 01:01:00.612732 | orchestrator | 2026-01-03 01:01:00 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED
2026-01-03 01:01:00.612773 | orchestrator | 2026-01-03 01:01:00 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:01:03.670416 | orchestrator | 2026-01-03 01:01:03 | INFO  | Task fc0e7eca-0855-4742-8fdf-4d7543aaf07c is in state STARTED
2026-01-03 01:01:03.674397 | orchestrator |
2026-01-03 01:01:03.674478 | orchestrator |
2026-01-03 01:01:03.674488 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-01-03 01:01:03.674497 | orchestrator |
2026-01-03 01:01:03.674503 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-01-03 01:01:03.674510 | orchestrator | Saturday 03 January 2026 00:59:47 +0000 (0:00:00.165) 0:00:00.165 ******
2026-01-03 01:01:03.674518 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] =>
(item=ceph.client.admin.keyring)
2026-01-03 01:01:03.674525 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-03 01:01:03.674531 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-03 01:01:03.674538 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-03 01:01:03.674544 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-03 01:01:03.674551 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-03 01:01:03.674558 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-03 01:01:03.674564 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-03 01:01:03.674571 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-03 01:01:03.674577 | orchestrator |
2026-01-03 01:01:03.674622 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-01-03 01:01:03.674630 | orchestrator | Saturday 03 January 2026 00:59:51 +0000 (0:00:04.018) 0:00:04.183 ******
2026-01-03 01:01:03.674637 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-03 01:01:03.674671 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-03 01:01:03.674678 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-03 01:01:03.674685 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-03 01:01:03.674705 |
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-03 01:01:03.674712 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-03 01:01:03.674718 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-03 01:01:03.674726 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-03 01:01:03.674733 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-03 01:01:03.674740 | orchestrator |
2026-01-03 01:01:03.674746 | orchestrator | TASK [Create share directory] **************************************************
2026-01-03 01:01:03.674753 | orchestrator | Saturday 03 January 2026 00:59:55 +0000 (0:00:04.059) 0:00:08.243 ******
2026-01-03 01:01:03.674760 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-03 01:01:03.674767 | orchestrator |
2026-01-03 01:01:03.674774 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-01-03 01:01:03.674780 | orchestrator | Saturday 03 January 2026 00:59:56 +0000 (0:00:01.021) 0:00:09.265 ******
2026-01-03 01:01:03.674787 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-01-03 01:01:03.674795 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-03 01:01:03.674801 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-03 01:01:03.674808 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-01-03 01:01:03.674814 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-03 01:01:03.674821 | orchestrator | changed: [testbed-manager -> localhost] =>
(item=ceph.client.nova.keyring)
2026-01-03 01:01:03.674827 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-01-03 01:01:03.674834 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-01-03 01:01:03.674841 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-01-03 01:01:03.674847 | orchestrator |
2026-01-03 01:01:03.674854 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-01-03 01:01:03.674861 | orchestrator | Saturday 03 January 2026 01:00:09 +0000 (0:00:12.740) 0:00:22.005 ******
2026-01-03 01:01:03.674867 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-01-03 01:01:03.674874 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-01-03 01:01:03.674881 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-03 01:01:03.674888 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-03 01:01:03.674909 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-03 01:01:03.674916 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-03 01:01:03.674923 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-01-03 01:01:03.674930 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-01-03 01:01:03.674936 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-01-03 01:01:03.674946 | orchestrator | 2026-01-03
01:01:03.674953 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-01-03 01:01:03.674960 | orchestrator | Saturday 03 January 2026 01:00:12 +0000 (0:00:03.139) 0:00:25.145 ******
2026-01-03 01:01:03.674967 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-01-03 01:01:03.674975 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-03 01:01:03.674981 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-03 01:01:03.674988 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-01-03 01:01:03.674994 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-03 01:01:03.675000 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-01-03 01:01:03.675006 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-01-03 01:01:03.675014 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-01-03 01:01:03.675020 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-01-03 01:01:03.675027 | orchestrator |
2026-01-03 01:01:03.675034 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 01:01:03.675042 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:01:03.675050 | orchestrator |
2026-01-03 01:01:03.675057 | orchestrator |
2026-01-03 01:01:03.675064 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 01:01:03.675071 | orchestrator | Saturday 03 January 2026 01:00:19 +0000 (0:00:06.777) 0:00:31.922 ******
2026-01-03 01:01:03.675083 | orchestrator | ===============================================================================
2026-01-03 01:01:03.675090 |
orchestrator | Write ceph keys to the share directory --------------------------------- 12.74s
2026-01-03 01:01:03.675097 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.78s
2026-01-03 01:01:03.675105 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.06s
2026-01-03 01:01:03.675112 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.02s
2026-01-03 01:01:03.675119 | orchestrator | Check if target directories exist --------------------------------------- 3.14s
2026-01-03 01:01:03.675125 | orchestrator | Create share directory -------------------------------------------------- 1.02s
2026-01-03 01:01:03.675132 | orchestrator |
2026-01-03 01:01:03.675140 | orchestrator |
2026-01-03 01:01:03.675147 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 01:01:03.675292 | orchestrator |
2026-01-03 01:01:03.675302 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 01:01:03.675310 | orchestrator | Saturday 03 January 2026 00:59:55 +0000 (0:00:00.316) 0:00:00.316 ******
2026-01-03 01:01:03.675318 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:01:03.675326 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:01:03.675333 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:01:03.675339 | orchestrator |
2026-01-03 01:01:03.675346 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 01:01:03.675352 | orchestrator | Saturday 03 January 2026 00:59:56 +0000 (0:00:00.417) 0:00:00.734 ******
2026-01-03 01:01:03.675359 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-01-03 01:01:03.675366 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-01-03 01:01:03.675373 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-01-03
01:01:03.675379 | orchestrator |
2026-01-03 01:01:03.675386 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-01-03 01:01:03.675392 | orchestrator |
2026-01-03 01:01:03.675399 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-03 01:01:03.675412 | orchestrator | Saturday 03 January 2026 00:59:56 +0000 (0:00:00.526) 0:00:01.260 ******
2026-01-03 01:01:03.675419 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 01:01:03.675425 | orchestrator |
2026-01-03 01:01:03.675432 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-01-03 01:01:03.675439 | orchestrator | Saturday 03 January 2026 00:59:57 +0000 (0:00:00.598) 0:00:01.859 ******
2026-01-03 01:01:03.675446 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (5 retries left).
2026-01-03 01:01:03.675453 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (4 retries left).
2026-01-03 01:01:03.675458 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (3 retries left).
2026-01-03 01:01:03.675464 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (2 retries left).
2026-01-03 01:01:03.675470 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (1 retries left).
2026-01-03 01:01:03.675508 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise 
exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767402061.2213764-3225-279352964983751/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767402061.2213764-3225-279352964983751/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767402061.2213764-3225-279352964983751/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_wkjoopqz/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_wkjoopqz/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_wkjoopqz/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_wkjoopqz/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_wkjoopqz/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File 
\"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. 
Please check that your auth_url is correct. Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-03 01:01:03.675525 | orchestrator |
2026-01-03 01:01:03.675531 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 01:01:03.675537 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-03 01:01:03.675544 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:01:03.675552 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:01:03.675563 | orchestrator |
2026-01-03 01:01:03.675570 | orchestrator |
2026-01-03 01:01:03.675576 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 01:01:03.675583 | orchestrator | Saturday 03 January 2026 01:01:02 +0000 (0:01:05.129) 0:01:06.989 ******
2026-01-03 01:01:03.675618 | orchestrator | ===============================================================================
2026-01-03 01:01:03.675625 | orchestrator | service-ks-register : designate | Creating services -------------------- 65.13s
2026-01-03 01:01:03.675631 | orchestrator | designate : include_tasks ----------------------------------------------- 0.60s
2026-01-03 01:01:03.675637 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2026-01-03 01:01:03.675644 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s
2026-01-03 01:01:03.675650 | orchestrator | 2026-01-03 01:01:03 | INFO  | Task e95568eb-68fb-45a2-a357-06ac84d9340c is in state SUCCESS
2026-01-03 01:01:03.675657 | orchestrator | 2026-01-03 01:01:03 | INFO  | Task d4e2be00-6087-4eff-a178-bfcec6b52ff3 is in state SUCCESS
2026-01-03 01:01:03.675664 | orchestrator |
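The designate registration above fails because every keystone discovery request against https://api-int.testbed.osism.xyz:5000 answers HTTP 503, and the task gives up after its five "FAILED - RETRYING" attempts. A minimal illustrative sketch of that retry behaviour, with a hypothetical `register_service` helper and a stubbed probe (none of this is from the OSISM tooling):

```python
# Hypothetical sketch: how a fixed-count retry against an identity
# endpoint behaves when the endpoint keeps returning HTTP 503, as in
# the "designate | Creating services" task above.

def register_service(probe, retries=5):
    """Call probe() up to `retries` times; return True on HTTP 200,
    False once all attempts are exhausted (the failure seen above)."""
    for _ in range(retries):
        if probe() == 200:
            return True
    return False

# Keystone stays unavailable for the whole task, as in this log:
assert register_service(lambda: 503) is False

# Had the endpoint recovered in time, a later attempt would succeed:
responses = iter([503, 503, 200])
assert register_service(lambda: next(responses)) is True
```

The real task delegates to the `openstack.cloud.catalog_service` module; the sketch only mirrors the attempt/give-up pattern visible in the log, not the module's API.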
2026-01-03 01:01:03.675670 | orchestrator |
2026-01-03 01:01:03.675677 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 01:01:03.675683 | orchestrator |
2026-01-03 01:01:03.675690 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 01:01:03.675696 | orchestrator | Saturday 03 January 2026 00:59:55 +0000 (0:00:00.293) 0:00:00.293 ******
2026-01-03 01:01:03.675703 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:01:03.675709 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:01:03.675716 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:01:03.675722 | orchestrator |
2026-01-03 01:01:03.675729 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 01:01:03.675736 | orchestrator | Saturday 03 January 2026 00:59:56 +0000 (0:00:00.449) 0:00:00.742 ******
2026-01-03 01:01:03.675742 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-01-03 01:01:03.675754 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-01-03 01:01:03.675760 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-01-03 01:01:03.675767 | orchestrator |
2026-01-03 01:01:03.675774 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-01-03 01:01:03.675780 | orchestrator |
2026-01-03 01:01:03.675787 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-03 01:01:03.675794 | orchestrator | Saturday 03 January 2026 00:59:56 +0000 (0:00:00.532) 0:00:01.275 ******
2026-01-03 01:01:03.675801 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 01:01:03.675807 | orchestrator |
2026-01-03 01:01:03.675814 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-01-03 01:01:03.675820 | orchestrator | Saturday 03 January 2026 00:59:57 +0000 (0:00:00.472) 0:00:01.748 ******
2026-01-03 01:01:03.675826 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (5 retries left).
2026-01-03 01:01:03.675833 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (4 retries left).
2026-01-03 01:01:03.675840 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (3 retries left).
2026-01-03 01:01:03.675846 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (2 retries left).
2026-01-03 01:01:03.675853 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (1 retries left).
2026-01-03 01:01:03.675873 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000.
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767402060.8021138-3207-280288239097088/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767402060.8021138-3207-280288239097088/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767402060.8021138-3207-280288239097088/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_4atfcen_/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_4atfcen_/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_4atfcen_/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_4atfcen_/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_4atfcen_/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-03 01:01:03.675891 | orchestrator | 2026-01-03 01:01:03.675898 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 01:01:03.675904 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-03 01:01:03.675910 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 01:01:03.675917 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 01:01:03.675924 | orchestrator | 2026-01-03 01:01:03.675930 | orchestrator | 2026-01-03 01:01:03.675937 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 01:01:03.675944 | orchestrator | Saturday 03 January 2026 01:01:02 +0000 (0:01:04.927) 0:01:06.676 ****** 2026-01-03 01:01:03.675953 | orchestrator | =============================================================================== 2026-01-03 01:01:03.675960 | orchestrator | service-ks-register : barbican | Creating services --------------------- 64.93s 2026-01-03 01:01:03.675967 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2026-01-03 01:01:03.675974 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.47s 2026-01-03 01:01:03.675980 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s 2026-01-03 01:01:03.676878 | orchestrator | 2026-01-03 01:01:03 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED 2026-01-03 01:01:03.677864 | orchestrator | 2026-01-03 01:01:03 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED 2026-01-03 01:01:03.678106 | orchestrator | 2026-01-03 01:01:03 | INFO  | Wait 1 
second(s) until the next check 2026-01-03 01:01:06.736726 | orchestrator | 2026-01-03 01:01:06 | INFO  | Task fc0e7eca-0855-4742-8fdf-4d7543aaf07c is in state STARTED 2026-01-03 01:01:06.739209 | orchestrator | 2026-01-03 01:01:06 | INFO  | Task f8692425-d160-4840-afb9-7e3310755f32 is in state STARTED 2026-01-03 01:01:06.740547 | orchestrator | 2026-01-03 01:01:06 | INFO  | Task 92a84c2a-f214-4fa5-a3ae-0cfd9dc28e84 is in state STARTED 2026-01-03 01:01:06.741935 | orchestrator | 2026-01-03 01:01:06 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED 2026-01-03 01:01:06.743325 | orchestrator | 2026-01-03 01:01:06 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state STARTED 2026-01-03 01:01:06.743388 | orchestrator | 2026-01-03 01:01:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:01:09.783803 | orchestrator | 2026-01-03 01:01:09 | INFO  | Task fc0e7eca-0855-4742-8fdf-4d7543aaf07c is in state STARTED 2026-01-03 01:01:09.784732 | orchestrator | 2026-01-03 01:01:09 | INFO  | Task f8692425-d160-4840-afb9-7e3310755f32 is in state STARTED 2026-01-03 01:01:09.785991 | orchestrator | 2026-01-03 01:01:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:01:09.787454 | orchestrator | 2026-01-03 01:01:09 | INFO  | Task 92a84c2a-f214-4fa5-a3ae-0cfd9dc28e84 is in state STARTED 2026-01-03 01:01:09.788422 | orchestrator | 2026-01-03 01:01:09 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED 2026-01-03 01:01:09.790252 | orchestrator | 2026-01-03 01:01:09.790295 | orchestrator | 2026-01-03 01:01:09.790304 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 01:01:09.790311 | orchestrator | 2026-01-03 01:01:09.790318 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 01:01:09.790325 | orchestrator | Saturday 03 January 2026 00:59:55 +0000 (0:00:00.262) 0:00:00.262 
****** 2026-01-03 01:01:09.790330 | orchestrator | ok: [testbed-node-0] 2026-01-03 01:01:09.790335 | orchestrator | ok: [testbed-node-1] 2026-01-03 01:01:09.790340 | orchestrator | ok: [testbed-node-2] 2026-01-03 01:01:09.790343 | orchestrator | ok: [testbed-node-3] 2026-01-03 01:01:09.790347 | orchestrator | ok: [testbed-node-4] 2026-01-03 01:01:09.790352 | orchestrator | ok: [testbed-node-5] 2026-01-03 01:01:09.790355 | orchestrator | 2026-01-03 01:01:09.790359 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 01:01:09.790371 | orchestrator | Saturday 03 January 2026 00:59:56 +0000 (0:00:00.928) 0:00:01.191 ****** 2026-01-03 01:01:09.790375 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-03 01:01:09.790384 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-03 01:01:09.790388 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-03 01:01:09.790392 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-03 01:01:09.790395 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-03 01:01:09.790400 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-03 01:01:09.790404 | orchestrator | 2026-01-03 01:01:09.790408 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-03 01:01:09.790411 | orchestrator | 2026-01-03 01:01:09.790415 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-03 01:01:09.790419 | orchestrator | Saturday 03 January 2026 00:59:57 +0000 (0:00:00.600) 0:00:01.792 ****** 2026-01-03 01:01:09.790424 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 01:01:09.790428 | orchestrator | 2026-01-03 01:01:09.790432 | orchestrator | TASK 
[neutron : Get container facts] ******************************************* 2026-01-03 01:01:09.790436 | orchestrator | Saturday 03 January 2026 00:59:58 +0000 (0:00:00.896) 0:00:02.689 ****** 2026-01-03 01:01:09.790440 | orchestrator | ok: [testbed-node-2] 2026-01-03 01:01:09.790444 | orchestrator | ok: [testbed-node-1] 2026-01-03 01:01:09.790448 | orchestrator | ok: [testbed-node-0] 2026-01-03 01:01:09.790451 | orchestrator | ok: [testbed-node-3] 2026-01-03 01:01:09.790455 | orchestrator | ok: [testbed-node-4] 2026-01-03 01:01:09.790459 | orchestrator | ok: [testbed-node-5] 2026-01-03 01:01:09.790477 | orchestrator | 2026-01-03 01:01:09.790481 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-01-03 01:01:09.790485 | orchestrator | Saturday 03 January 2026 00:59:59 +0000 (0:00:01.022) 0:00:03.711 ****** 2026-01-03 01:01:09.790494 | orchestrator | ok: [testbed-node-1] 2026-01-03 01:01:09.790498 | orchestrator | ok: [testbed-node-0] 2026-01-03 01:01:09.790502 | orchestrator | ok: [testbed-node-2] 2026-01-03 01:01:09.790510 | orchestrator | ok: [testbed-node-3] 2026-01-03 01:01:09.790514 | orchestrator | ok: [testbed-node-4] 2026-01-03 01:01:09.790517 | orchestrator | ok: [testbed-node-5] 2026-01-03 01:01:09.790521 | orchestrator | 2026-01-03 01:01:09.790525 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-03 01:01:09.790529 | orchestrator | Saturday 03 January 2026 01:00:00 +0000 (0:00:01.139) 0:00:04.850 ****** 2026-01-03 01:01:09.790533 | orchestrator | ok: [testbed-node-0] => { 2026-01-03 01:01:09.790537 | orchestrator |  "changed": false, 2026-01-03 01:01:09.790541 | orchestrator |  "msg": "All assertions passed" 2026-01-03 01:01:09.790545 | orchestrator | } 2026-01-03 01:01:09.790549 | orchestrator | ok: [testbed-node-1] => { 2026-01-03 01:01:09.790553 | orchestrator |  "changed": false, 2026-01-03 01:01:09.790557 | orchestrator |  "msg": "All 
assertions passed" 2026-01-03 01:01:09.790560 | orchestrator | } 2026-01-03 01:01:09.790564 | orchestrator | ok: [testbed-node-2] => { 2026-01-03 01:01:09.790568 | orchestrator |  "changed": false, 2026-01-03 01:01:09.790572 | orchestrator |  "msg": "All assertions passed" 2026-01-03 01:01:09.790576 | orchestrator | } 2026-01-03 01:01:09.790579 | orchestrator | ok: [testbed-node-3] => { 2026-01-03 01:01:09.790583 | orchestrator |  "changed": false, 2026-01-03 01:01:09.790587 | orchestrator |  "msg": "All assertions passed" 2026-01-03 01:01:09.790591 | orchestrator | } 2026-01-03 01:01:09.790595 | orchestrator | ok: [testbed-node-4] => { 2026-01-03 01:01:09.790611 | orchestrator |  "changed": false, 2026-01-03 01:01:09.790618 | orchestrator |  "msg": "All assertions passed" 2026-01-03 01:01:09.790624 | orchestrator | } 2026-01-03 01:01:09.790628 | orchestrator | ok: [testbed-node-5] => { 2026-01-03 01:01:09.790632 | orchestrator |  "changed": false, 2026-01-03 01:01:09.790636 | orchestrator |  "msg": "All assertions passed" 2026-01-03 01:01:09.790640 | orchestrator | } 2026-01-03 01:01:09.790644 | orchestrator | 2026-01-03 01:01:09.790648 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-01-03 01:01:09.790652 | orchestrator | Saturday 03 January 2026 01:00:01 +0000 (0:00:00.728) 0:00:05.578 ****** 2026-01-03 01:01:09.790655 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:01:09.790659 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:01:09.790663 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:01:09.790668 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:01:09.790674 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:01:09.790680 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:01:09.790687 | orchestrator | 2026-01-03 01:01:09.790693 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-01-03 01:01:09.790698 | 
orchestrator | Saturday 03 January 2026 01:00:01 +0000 (0:00:00.519) 0:00:06.098 ****** 2026-01-03 01:01:09.790705 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (5 retries left). 2026-01-03 01:01:09.790711 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (4 retries left). 2026-01-03 01:01:09.790717 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (3 retries left). 2026-01-03 01:01:09.790731 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (2 retries left). 2026-01-03 01:01:09.790736 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (1 retries left). 2026-01-03 01:01:09.790767 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767402066.342502-3264-67457764864099/AnsiballZ_catalog_service.py\", line 107, in <module>\n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767402066.342502-3264-67457764864099/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767402066.342502-3264-67457764864099/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_69kkdbug/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_69kkdbug/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_69kkdbug/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_69kkdbug/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_69kkdbug/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-03 01:01:09.790787 | orchestrator | 2026-01-03 01:01:09.790794 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 01:01:09.790801 | orchestrator | testbed-node-0 : ok=6  changed=0 unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-01-03 01:01:09.790807 | orchestrator | testbed-node-1 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 01:01:09.790811 | orchestrator | testbed-node-2 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 01:01:09.790815 | orchestrator | testbed-node-3 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 01:01:09.790819 | orchestrator | testbed-node-4 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 01:01:09.790823 | orchestrator | testbed-node-5 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 01:01:09.790827 | orchestrator | 2026-01-03 01:01:09.790833 | orchestrator | 2026-01-03 01:01:09.790839 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 01:01:09.790848 | orchestrator | Saturday 03 January 2026 01:01:07 +0000 (0:01:06.148) 0:01:12.247 ****** 2026-01-03 01:01:09.790856 | orchestrator | =============================================================================== 2026-01-03 01:01:09.790862 | orchestrator | service-ks-register : neutron | Creating services ---------------------- 66.15s 2026-01-03 01:01:09.790869 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.14s 2026-01-03 01:01:09.790875 | orchestrator | neutron : Get container facts ------------------------------------------- 1.02s 2026-01-03 01:01:09.790881 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 0.93s 2026-01-03 01:01:09.790892 | orchestrator | neutron : include_tasks ------------------------------------------------- 0.90s 2026-01-03 01:01:09.790901 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.73s 2026-01-03 01:01:09.790915 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2026-01-03 01:01:09.790925 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.52s 2026-01-03 01:01:09.790933 | orchestrator | 2026-01-03 01:01:09 | INFO  | Task 1868b6a2-e405-4fb4-8f71-13e1908a938e is in state SUCCESS 2026-01-03 01:01:09.790940 | orchestrator | 2026-01-03 01:01:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:01:12.840310 | orchestrator | 2026-01-03 01:01:12 | INFO  | Task fc0e7eca-0855-4742-8fdf-4d7543aaf07c is in state STARTED 2026-01-03 01:01:12.840861 | orchestrator | 2026-01-03 01:01:12 | INFO  | Task f8692425-d160-4840-afb9-7e3310755f32 is in state STARTED 2026-01-03 01:01:12.841799 | orchestrator | 2026-01-03 01:01:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:01:12.841841 | orchestrator | 2026-01-03 01:01:12 | INFO  | Task 92a84c2a-f214-4fa5-a3ae-0cfd9dc28e84 is in state STARTED 2026-01-03 01:01:12.842986 | orchestrator | 2026-01-03 01:01:12 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED 2026-01-03 01:01:12.843017 | orchestrator | 2026-01-03 01:01:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:01:15.880834 | orchestrator | 2026-01-03 01:01:15 | INFO  | Task fc0e7eca-0855-4742-8fdf-4d7543aaf07c is in state STARTED 2026-01-03 01:01:15.882823 | orchestrator | 2026-01-03 01:01:15 | INFO  | Task f8692425-d160-4840-afb9-7e3310755f32 is in state STARTED 2026-01-03 01:01:15.885384 | orchestrator | 2026-01-03 01:01:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 
2026-01-03 01:01:15.889134 | orchestrator | 2026-01-03 01:01:15 | INFO  | Task 92a84c2a-f214-4fa5-a3ae-0cfd9dc28e84 is in state STARTED 2026-01-03 01:01:15.891837 | orchestrator | 2026-01-03 01:01:15 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED 2026-01-03 01:01:15.891896 | orchestrator | 2026-01-03 01:01:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:01:18.945550 | orchestrator | 2026-01-03 01:01:18 | INFO  | Task fc0e7eca-0855-4742-8fdf-4d7543aaf07c is in state SUCCESS 2026-01-03 01:01:18.945658 | orchestrator | 2026-01-03 01:01:18 | INFO  | Task f8692425-d160-4840-afb9-7e3310755f32 is in state STARTED 2026-01-03 01:01:18.949085 | orchestrator | 2026-01-03 01:01:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:01:18.950133 | orchestrator | 2026-01-03 01:01:18 | INFO  | Task 92a84c2a-f214-4fa5-a3ae-0cfd9dc28e84 is in state STARTED 2026-01-03 01:01:18.952886 | orchestrator | 2026-01-03 01:01:18 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED 2026-01-03 01:01:18.953022 | orchestrator | 2026-01-03 01:01:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:01:22.006166 | orchestrator | 2026-01-03 01:01:22 | INFO  | Task f8692425-d160-4840-afb9-7e3310755f32 is in state STARTED 2026-01-03 01:01:22.007683 | orchestrator | 2026-01-03 01:01:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:01:22.009534 | orchestrator | 2026-01-03 01:01:22 | INFO  | Task 92a84c2a-f214-4fa5-a3ae-0cfd9dc28e84 is in state STARTED 2026-01-03 01:01:22.011712 | orchestrator | 2026-01-03 01:01:22 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:01:22.011860 | orchestrator | 2026-01-03 01:01:22 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED 2026-01-03 01:01:22.011903 | orchestrator | 2026-01-03 01:01:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:01:25.085809 | 
orchestrator | 2026-01-03 01:01:25 | INFO  | Task f8692425-d160-4840-afb9-7e3310755f32 is in state STARTED 2026-01-03 01:01:25.088560 | orchestrator | 2026-01-03 01:01:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:01:25.089819 | orchestrator | 2026-01-03 01:01:25 | INFO  | Task 92a84c2a-f214-4fa5-a3ae-0cfd9dc28e84 is in state STARTED 2026-01-03 01:01:25.090293 | orchestrator | 2026-01-03 01:01:25 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:01:25.091750 | orchestrator | 2026-01-03 01:01:25 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED 2026-01-03 01:01:25.091869 | orchestrator | 2026-01-03 01:01:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:10.996509 | orchestrator | 2026-01-03 01:02:10 | INFO  | Task 
f8692425-d160-4840-afb9-7e3310755f32 is in state STARTED
2026-01-03 01:02:10.998209 | orchestrator | 2026-01-03 01:02:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 01:02:11.000544 | orchestrator | 2026-01-03 01:02:11 | INFO  | Task 92a84c2a-f214-4fa5-a3ae-0cfd9dc28e84 is in state STARTED
2026-01-03 01:02:11.002179 | orchestrator | 2026-01-03 01:02:11 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED
2026-01-03 01:02:11.003961 | orchestrator | 2026-01-03 01:02:11 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED
2026-01-03 01:02:11.004000 | orchestrator | 2026-01-03 01:02:11 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:02:14.057043 | orchestrator | 2026-01-03 01:02:14 | INFO  | Task f8692425-d160-4840-afb9-7e3310755f32 is in state STARTED
2026-01-03 01:02:14.059482 | orchestrator | 2026-01-03 01:02:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 01:02:14.061702 | orchestrator | 2026-01-03 01:02:14 | INFO  | Task 92a84c2a-f214-4fa5-a3ae-0cfd9dc28e84 is in state STARTED
2026-01-03 01:02:14.063491 | orchestrator | 2026-01-03 01:02:14 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED
2026-01-03 01:02:14.064972 | orchestrator | 2026-01-03 01:02:14 | INFO  | Task 4460bc5e-78b4-4f94-b286-42f6baddc091 is in state STARTED
2026-01-03 01:02:14.064999 | orchestrator | 2026-01-03 01:02:14 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:02:17.111912 | orchestrator |
2026-01-03 01:02:17.112050 | orchestrator |
2026-01-03 01:02:17.112064 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-03 01:02:17.112072 | orchestrator |
2026-01-03 01:02:17.112079 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-03 01:02:17.112086 | orchestrator | Saturday 03 January 2026 01:00:24 +0000 (0:00:00.169) 0:00:00.169 ******
2026-01-03 01:02:17.112094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-03 01:02:17.112102 | orchestrator |
2026-01-03 01:02:17.112108 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-03 01:02:17.112115 | orchestrator | Saturday 03 January 2026 01:00:24 +0000 (0:00:00.159) 0:00:00.329 ******
2026-01-03 01:02:17.112123 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-03 01:02:17.112131 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-03 01:02:17.112139 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-03 01:02:17.112146 | orchestrator |
2026-01-03 01:02:17.112153 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-03 01:02:17.112159 | orchestrator | Saturday 03 January 2026 01:00:25 +0000 (0:00:01.019) 0:00:01.348 ******
2026-01-03 01:02:17.112167 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-03 01:02:17.112200 | orchestrator |
2026-01-03 01:02:17.112207 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-03 01:02:17.112214 | orchestrator | Saturday 03 January 2026 01:00:26 +0000 (0:00:01.175) 0:00:02.524 ******
2026-01-03 01:02:17.112221 | orchestrator | changed: [testbed-manager]
2026-01-03 01:02:17.112229 | orchestrator |
2026-01-03 01:02:17.112254 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-03 01:02:17.112261 | orchestrator | Saturday 03 January 2026 01:00:27 +0000 (0:00:00.814) 0:00:03.338 ******
2026-01-03 01:02:17.112267 | orchestrator | changed: [testbed-manager]
2026-01-03 01:02:17.112273 | orchestrator |
2026-01-03 01:02:17.112280 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-03 01:02:17.112286 | orchestrator | Saturday 03 January 2026 01:00:28 +0000 (0:00:00.874) 0:00:04.213 ******
2026-01-03 01:02:17.112292 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-01-03 01:02:17.112298 | orchestrator | ok: [testbed-manager]
2026-01-03 01:02:17.112304 | orchestrator |
2026-01-03 01:02:17.112311 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-01-03 01:02:17.112318 | orchestrator | Saturday 03 January 2026 01:01:09 +0000 (0:00:41.216) 0:00:45.429 ******
2026-01-03 01:02:17.112324 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-01-03 01:02:17.112332 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-01-03 01:02:17.112339 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-01-03 01:02:17.112346 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-01-03 01:02:17.112354 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-01-03 01:02:17.112362 | orchestrator |
2026-01-03 01:02:17.112369 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-01-03 01:02:17.112376 | orchestrator | Saturday 03 January 2026 01:01:13 +0000 (0:00:03.906) 0:00:49.335 ******
2026-01-03 01:02:17.112382 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-01-03 01:02:17.112387 | orchestrator |
2026-01-03 01:02:17.112394 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-01-03 01:02:17.112401 | orchestrator | Saturday 03 January 2026 01:01:13 +0000 (0:00:00.442) 0:00:49.778 ******
2026-01-03 01:02:17.112407 | orchestrator | skipping: [testbed-manager]
2026-01-03 01:02:17.112414 | orchestrator |
2026-01-03 01:02:17.112420 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-01-03 01:02:17.112428 | orchestrator | Saturday 03 January 2026 01:01:13 +0000 (0:00:00.126) 0:00:49.904 ******
2026-01-03 01:02:17.112434 | orchestrator | skipping: [testbed-manager]
2026-01-03 01:02:17.112441 | orchestrator |
2026-01-03 01:02:17.112448 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-01-03 01:02:17.112455 | orchestrator | Saturday 03 January 2026 01:01:14 +0000 (0:00:00.408) 0:00:50.313 ******
2026-01-03 01:02:17.112462 | orchestrator | changed: [testbed-manager]
2026-01-03 01:02:17.112469 | orchestrator |
2026-01-03 01:02:17.112476 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-01-03 01:02:17.112483 | orchestrator | Saturday 03 January 2026 01:01:15 +0000 (0:00:01.320) 0:00:51.633 ******
2026-01-03 01:02:17.112489 | orchestrator | changed: [testbed-manager]
2026-01-03 01:02:17.112496 | orchestrator |
2026-01-03 01:02:17.112502 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-01-03 01:02:17.112509 | orchestrator | Saturday 03 January 2026 01:01:16 +0000 (0:00:00.679) 0:00:52.312 ******
2026-01-03 01:02:17.112515 | orchestrator | changed: [testbed-manager]
2026-01-03 01:02:17.112521 | orchestrator |
2026-01-03 01:02:17.112528 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-01-03 01:02:17.112534 | orchestrator | Saturday 03 January 2026 01:01:16 +0000 (0:00:00.658) 0:00:52.970 ******
2026-01-03 01:02:17.112541 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-01-03 01:02:17.112548 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-01-03 01:02:17.112566 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-01-03 01:02:17.112572 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-01-03 01:02:17.112578 | orchestrator |
2026-01-03 01:02:17.112586 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 01:02:17.112595 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-03 01:02:17.112604 | orchestrator |
2026-01-03 01:02:17.112611 | orchestrator |
2026-01-03 01:02:17.112640 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 01:02:17.112648 | orchestrator | Saturday 03 January 2026 01:01:18 +0000 (0:00:01.649) 0:00:54.620 ******
2026-01-03 01:02:17.112655 | orchestrator | ===============================================================================
2026-01-03 01:02:17.112661 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.22s
2026-01-03 01:02:17.112668 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.91s
2026-01-03 01:02:17.112675 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.65s
2026-01-03 01:02:17.112682 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.32s
2026-01-03 01:02:17.112688 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.18s
2026-01-03 01:02:17.112695 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.02s
2026-01-03 01:02:17.112701 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.87s
2026-01-03 01:02:17.112707 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.81s
2026-01-03 01:02:17.112712 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.68s
2026-01-03 01:02:17.112771 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.66s
2026-01-03 01:02:17.112913 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.44s
2026-01-03 01:02:17.112922 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.41s
2026-01-03 01:02:17.112928 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.16s
2026-01-03 01:02:17.112933 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2026-01-03 01:02:17.112939 | orchestrator |
2026-01-03 01:02:17.112946 | orchestrator |
2026-01-03 01:02:17.112960 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 01:02:17.112966 | orchestrator |
2026-01-03 01:02:17.112973 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 01:02:17.112979 | orchestrator | Saturday 03 January 2026 01:01:07 +0000 (0:00:00.262) 0:00:00.262 ******
2026-01-03 01:02:17.112985 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:02:17.112991 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:02:17.112997 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:02:17.113003 | orchestrator |
2026-01-03 01:02:17.113009 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 01:02:17.113015 | orchestrator | Saturday 03 January 2026 01:01:07 +0000 (0:00:00.308) 0:00:00.571 ******
2026-01-03 01:02:17.113021 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-03 01:02:17.113027 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-03 01:02:17.113033 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-03 01:02:17.113039 | orchestrator |
2026-01-03 01:02:17.113045 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-01-03 01:02:17.113051 | orchestrator |
2026-01-03 01:02:17.113057 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-03 01:02:17.113063 | orchestrator | Saturday 03 January 2026 01:01:07 +0000 (0:00:00.457) 0:00:01.029 ******
2026-01-03 01:02:17.113069 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 01:02:17.113088 | orchestrator |
2026-01-03 01:02:17.113094 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-01-03 01:02:17.113099 | orchestrator | Saturday 03 January 2026 01:01:08 +0000 (0:00:00.581) 0:00:01.610 ******
2026-01-03 01:02:17.113106 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (5 retries left).
2026-01-03 01:02:17.113112 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (4 retries left).
2026-01-03 01:02:17.113118 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (3 retries left).
2026-01-03 01:02:17.113125 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (2 retries left).
2026-01-03 01:02:17.113130 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (1 retries left).
2026-01-03 01:02:17.113172 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767402133.0453084-3678-276393225965287/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767402133.0453084-3678-276393225965287/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767402133.0453084-3678-276393225965287/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_56gmlqwt/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_56gmlqwt/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_56gmlqwt/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_56gmlqwt/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_56gmlqwt/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-03 01:02:17.113187 | orchestrator |
2026-01-03 01:02:17.113193 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 01:02:17.113203 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-03 01:02:17.113211 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:02:17.113219 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:02:17.113225 | orchestrator |
2026-01-03 01:02:17.113230 | orchestrator |
2026-01-03 01:02:17.113236 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 01:02:17.113243 | orchestrator | Saturday 03 January 2026 01:02:14 +0000 (0:01:05.653) 0:01:07.263 ******
2026-01-03 01:02:17.113254 | orchestrator | ===============================================================================
2026-01-03 01:02:17.113260 | orchestrator | service-ks-register : magnum | Creating services ----------------------- 65.65s
2026-01-03 01:02:17.113266 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.58s
2026-01-03 01:02:17.113272 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s
2026-01-03 01:02:17.113278 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-01-03 01:02:17.113285 | orchestrator | 2026-01-03 01:02:17 | INFO  | Task f8692425-d160-4840-afb9-7e3310755f32 is in state SUCCESS
2026-01-03 01:02:17.113291 | orchestrator | 2026-01-03 01:02:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 01:02:17.113297 | orchestrator | 2026-01-03 01:02:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 01:02:17.114300 | orchestrator | 2026-01-03 01:02:17 | INFO  | Task 92a84c2a-f214-4fa5-a3ae-0cfd9dc28e84 is in state SUCCESS
2026-01-03 01:02:17.114868 | orchestrator |
2026-01-03 01:02:17.114897 | orchestrator |
2026-01-03 01:02:17.114904 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-03 01:02:17.114912 | orchestrator |
2026-01-03 01:02:17.114919 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-03 01:02:17.114926 | orchestrator | Saturday 03 January 2026 01:01:07 +0000 (0:00:00.262) 0:00:00.262 ******
2026-01-03 01:02:17.114932 | orchestrator | ok: [testbed-node-0]
2026-01-03 01:02:17.114940 | orchestrator | ok: [testbed-node-1]
2026-01-03 01:02:17.114946 | orchestrator | ok: [testbed-node-2]
2026-01-03 01:02:17.114952 | orchestrator |
2026-01-03 01:02:17.114958 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-03 01:02:17.114965 | orchestrator | Saturday 03 January 2026 01:01:07 +0000 (0:00:00.328) 0:00:00.591 ******
2026-01-03 01:02:17.114972 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-01-03 01:02:17.114979 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-01-03 01:02:17.114985 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-01-03 01:02:17.114992 | orchestrator |
2026-01-03 01:02:17.114998 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-01-03 01:02:17.115004 | orchestrator |
2026-01-03 01:02:17.115010 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-01-03 01:02:17.115017 | orchestrator | Saturday 03 January 2026 01:01:07 +0000 (0:00:00.515) 0:00:01.106 ******
2026-01-03 01:02:17.115023 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-03 01:02:17.115030 | orchestrator |
2026-01-03 01:02:17.115036 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-01-03 01:02:17.115042 | orchestrator | Saturday 03 January 2026 01:01:08 +0000 (0:00:00.515) 0:00:01.621 ******
2026-01-03 01:02:17.115049 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (5 retries left).
2026-01-03 01:02:17.115056 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (4 retries left).
2026-01-03 01:02:17.115062 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (3 retries left).
2026-01-03 01:02:17.115068 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (2 retries left).
2026-01-03 01:02:17.115074 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (1 retries left).
2026-01-03 01:02:17.115116 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.
Traceback (most recent call last):
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py", line 133, in _do_create_plugin
    disc = self.get_discovery(session,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py", line 605, in get_discovery
    return discover.get_discovery(session=session, url=url,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py", line 1459, in get_discovery
    disc = Discover(session, url, authenticated=authenticated)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py", line 539, in __init__
    self._data = get_version_data(session, url,
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py", line 106, in get_version_data
    resp = session.get(url, headers=headers, authenticated=authenticated)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py", line 1154, in get
    return self.request(url, 'GET', **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py", line 985, in request
    raise exceptions.from_response(resp, method, url)
keystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/ansible-tmp-1767402132.8821163-3667-178920196990771/AnsiballZ_catalog_service.py", line 107, in 
    _ansiballz_main()
  File "/tmp/ansible-tmp-1767402132.8821163-3667-178920196990771/AnsiballZ_catalog_service.py", line 99, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/tmp/ansible-tmp-1767402132.8821163-3667-178920196990771/AnsiballZ_catalog_service.py", line 47, in invoke_module
    runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),
  File "", line 226, in run_module
  File "", line 98, in _run_module_code
  File "", line 88, in _run_code
  File "/tmp/ansible_os_keystone_service_payload_p2uqayqg/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py", line 211, in 
  File "/tmp/ansible_os_keystone_service_payload_p2uqayqg/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py", line 207, in main
  File "/tmp/ansible_os_keystone_service_payload_p2uqayqg/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py", line 417, in __call__
  File "/tmp/ansible_os_keystone_service_payload_p2uqayqg/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py", line 113, in run
  File "/tmp/ansible_os_keystone_service_payload_p2uqayqg/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py", line 175, in _find
  File "/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py", line 88, in __get__
    proxy = self._make_proxy(instance)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py", line 286, in _make_proxy
    found_version = temp_adapter.get_api_major_version()
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 352, in get_api_major_version
    return self.session.get_api_major_version(auth or self.auth, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py", line 1289, in get_api_major_version
    return auth.get_api_major_version(self, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py", line 497, in get_api_major_version
    data = get_endpoint_data(discover_versions=discover_versions)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py", line 268, in get_endpoint_data
    service_catalog = self.get_access(session).service_catalog
                      ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py", line 131, in get_access
    self.auth_ref = self.get_auth_ref(session)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py", line 203, in get_auth_ref
    self._plugin = self._do_create_plugin(session)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py", line 155, in _do_create_plugin
    raise exceptions.DiscoveryFailure(
keystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Service Unavailable (HTTP 503)
", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-03 01:02:17.115145 | orchestrator |
2026-01-03 01:02:17.115151 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 01:02:17.115160 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-03 01:02:17.115167 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:02:17.115175 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:02:17.115181 | orchestrator |
2026-01-03 01:02:17.115187 | orchestrator |
2026-01-03 01:02:17.115193 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 01:02:17.115199 | orchestrator | Saturday 03 January 2026 01:02:14 +0000 (0:01:05.764) 0:01:07.386 ******
2026-01-03 01:02:17.115205 | orchestrator | ===============================================================================
2026-01-03 01:02:17.115211 | orchestrator | service-ks-register : placement | Creating services -------------------- 65.76s
2026-01-03 01:02:17.115217 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s
2026-01-03 01:02:17.115223 | orchestrator | placement : include_tasks ----------------------------------------------- 0.52s
2026-01-03 01:02:17.115229 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-01-03 01:02:17.116204 | orchestrator | 2026-01-03 01:02:17 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED
2026-01-03 01:02:17.117651 | orchestrator | 2026-01-03 01:02:17 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED
2026-01-03 01:02:17.118790 | orchestrator | 2026-01-03 01:02:17 | INFO  | Task 
4460bc5e-78b4-4f94-b286-42f6baddc091 is in state SUCCESS 2026-01-03 01:02:17.118979 | orchestrator | 2026-01-03 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:20.158321 | orchestrator | 2026-01-03 01:02:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:20.158526 | orchestrator | 2026-01-03 01:02:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:20.159684 | orchestrator | 2026-01-03 01:02:20 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:20.161240 | orchestrator | 2026-01-03 01:02:20 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:20.161291 | orchestrator | 2026-01-03 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:23.193100 | orchestrator | 2026-01-03 01:02:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:23.194791 | orchestrator | 2026-01-03 01:02:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:23.196549 | orchestrator | 2026-01-03 01:02:23 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:23.196597 | orchestrator | 2026-01-03 01:02:23 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:23.196603 | orchestrator | 2026-01-03 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:26.260058 | orchestrator | 2026-01-03 01:02:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:26.262595 | orchestrator | 2026-01-03 01:02:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:26.267624 | orchestrator | 2026-01-03 01:02:26 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:26.270159 | orchestrator | 2026-01-03 01:02:26 | INFO  | Task 
51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:26.270356 | orchestrator | 2026-01-03 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:29.321160 | orchestrator | 2026-01-03 01:02:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:29.322488 | orchestrator | 2026-01-03 01:02:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:29.325050 | orchestrator | 2026-01-03 01:02:29 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:29.325949 | orchestrator | 2026-01-03 01:02:29 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:29.326307 | orchestrator | 2026-01-03 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:32.376983 | orchestrator | 2026-01-03 01:02:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:32.382923 | orchestrator | 2026-01-03 01:02:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:32.383006 | orchestrator | 2026-01-03 01:02:32 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:32.383018 | orchestrator | 2026-01-03 01:02:32 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:32.383027 | orchestrator | 2026-01-03 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:35.422230 | orchestrator | 2026-01-03 01:02:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:35.422996 | orchestrator | 2026-01-03 01:02:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:35.426206 | orchestrator | 2026-01-03 01:02:35 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:35.428289 | orchestrator | 2026-01-03 01:02:35 | INFO  | Task 
51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:35.428373 | orchestrator | 2026-01-03 01:02:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:38.470984 | orchestrator | 2026-01-03 01:02:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:38.471075 | orchestrator | 2026-01-03 01:02:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:38.472489 | orchestrator | 2026-01-03 01:02:38 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:38.473627 | orchestrator | 2026-01-03 01:02:38 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:38.473676 | orchestrator | 2026-01-03 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:41.508185 | orchestrator | 2026-01-03 01:02:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:41.509056 | orchestrator | 2026-01-03 01:02:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:41.509873 | orchestrator | 2026-01-03 01:02:41 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:41.511859 | orchestrator | 2026-01-03 01:02:41 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:41.511896 | orchestrator | 2026-01-03 01:02:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:44.565249 | orchestrator | 2026-01-03 01:02:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:44.565733 | orchestrator | 2026-01-03 01:02:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:44.566683 | orchestrator | 2026-01-03 01:02:44 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:44.567679 | orchestrator | 2026-01-03 01:02:44 | INFO  | Task 
51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:44.567722 | orchestrator | 2026-01-03 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:47.620427 | orchestrator | 2026-01-03 01:02:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:47.620989 | orchestrator | 2026-01-03 01:02:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:47.622281 | orchestrator | 2026-01-03 01:02:47 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:47.623571 | orchestrator | 2026-01-03 01:02:47 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:47.623607 | orchestrator | 2026-01-03 01:02:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:50.656745 | orchestrator | 2026-01-03 01:02:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:50.656937 | orchestrator | 2026-01-03 01:02:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:50.657257 | orchestrator | 2026-01-03 01:02:50 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:50.657985 | orchestrator | 2026-01-03 01:02:50 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:50.658008 | orchestrator | 2026-01-03 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:53.683064 | orchestrator | 2026-01-03 01:02:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:53.684998 | orchestrator | 2026-01-03 01:02:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:53.685207 | orchestrator | 2026-01-03 01:02:53 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:53.687007 | orchestrator | 2026-01-03 01:02:53 | INFO  | Task 
51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:53.687043 | orchestrator | 2026-01-03 01:02:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:56.724724 | orchestrator | 2026-01-03 01:02:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:56.724912 | orchestrator | 2026-01-03 01:02:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:56.726265 | orchestrator | 2026-01-03 01:02:56 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:56.727188 | orchestrator | 2026-01-03 01:02:56 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:56.727246 | orchestrator | 2026-01-03 01:02:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:02:59.767280 | orchestrator | 2026-01-03 01:02:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:02:59.768241 | orchestrator | 2026-01-03 01:02:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:02:59.770203 | orchestrator | 2026-01-03 01:02:59 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state STARTED 2026-01-03 01:02:59.771794 | orchestrator | 2026-01-03 01:02:59 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:02:59.771840 | orchestrator | 2026-01-03 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:02.824257 | orchestrator | 2026-01-03 01:03:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:02.824541 | orchestrator | 2026-01-03 01:03:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:02.826281 | orchestrator | 2026-01-03 01:03:02 | INFO  | Task 5878bca4-571e-4d11-8148-5de0297ada58 is in state SUCCESS 2026-01-03 01:03:02.827891 | orchestrator | 2026-01-03 01:03:02 | INFO  | Task 
51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:02.827919 | orchestrator | 2026-01-03 01:03:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:05.862908 | orchestrator | 2026-01-03 01:03:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:05.864073 | orchestrator | 2026-01-03 01:03:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:05.865081 | orchestrator | 2026-01-03 01:03:05 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:05.865134 | orchestrator | 2026-01-03 01:03:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:08.920832 | orchestrator | 2026-01-03 01:03:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:08.920918 | orchestrator | 2026-01-03 01:03:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:08.921436 | orchestrator | 2026-01-03 01:03:08 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:08.921447 | orchestrator | 2026-01-03 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:11.966320 | orchestrator | 2026-01-03 01:03:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:11.968020 | orchestrator | 2026-01-03 01:03:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:11.969593 | orchestrator | 2026-01-03 01:03:11 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:11.969657 | orchestrator | 2026-01-03 01:03:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:15.031730 | orchestrator | 2026-01-03 01:03:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:15.033170 | orchestrator | 2026-01-03 01:03:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state 
STARTED 2026-01-03 01:03:15.034758 | orchestrator | 2026-01-03 01:03:15 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:15.034819 | orchestrator | 2026-01-03 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:18.096167 | orchestrator | 2026-01-03 01:03:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:18.098165 | orchestrator | 2026-01-03 01:03:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:18.099616 | orchestrator | 2026-01-03 01:03:18 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:18.099663 | orchestrator | 2026-01-03 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:21.154923 | orchestrator | 2026-01-03 01:03:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:21.157631 | orchestrator | 2026-01-03 01:03:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:21.159754 | orchestrator | 2026-01-03 01:03:21 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:21.159826 | orchestrator | 2026-01-03 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:24.218289 | orchestrator | 2026-01-03 01:03:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:24.218470 | orchestrator | 2026-01-03 01:03:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:24.219941 | orchestrator | 2026-01-03 01:03:24 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:24.220005 | orchestrator | 2026-01-03 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:27.297586 | orchestrator | 2026-01-03 01:03:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:27.298256 | orchestrator | 
2026-01-03 01:03:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:27.300326 | orchestrator | 2026-01-03 01:03:27 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:27.300372 | orchestrator | 2026-01-03 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:30.345173 | orchestrator | 2026-01-03 01:03:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:30.347412 | orchestrator | 2026-01-03 01:03:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:30.349203 | orchestrator | 2026-01-03 01:03:30 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:30.349283 | orchestrator | 2026-01-03 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:33.396808 | orchestrator | 2026-01-03 01:03:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:33.398584 | orchestrator | 2026-01-03 01:03:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:33.400215 | orchestrator | 2026-01-03 01:03:33 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:33.400388 | orchestrator | 2026-01-03 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:36.443880 | orchestrator | 2026-01-03 01:03:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:36.445644 | orchestrator | 2026-01-03 01:03:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:36.449323 | orchestrator | 2026-01-03 01:03:36 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:36.449372 | orchestrator | 2026-01-03 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:39.499058 | orchestrator | 2026-01-03 01:03:39 | INFO  | Task 
c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:39.500243 | orchestrator | 2026-01-03 01:03:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:39.501257 | orchestrator | 2026-01-03 01:03:39 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:39.501305 | orchestrator | 2026-01-03 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:42.554232 | orchestrator | 2026-01-03 01:03:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:42.555604 | orchestrator | 2026-01-03 01:03:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:42.557169 | orchestrator | 2026-01-03 01:03:42 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:42.557275 | orchestrator | 2026-01-03 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:45.606608 | orchestrator | 2026-01-03 01:03:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:45.607950 | orchestrator | 2026-01-03 01:03:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:45.608988 | orchestrator | 2026-01-03 01:03:45 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:45.609026 | orchestrator | 2026-01-03 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:48.647079 | orchestrator | 2026-01-03 01:03:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:48.647814 | orchestrator | 2026-01-03 01:03:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:48.648824 | orchestrator | 2026-01-03 01:03:48 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:48.648886 | orchestrator | 2026-01-03 01:03:48 | INFO  | Wait 1 second(s) until the next 
check 2026-01-03 01:03:51.694265 | orchestrator | 2026-01-03 01:03:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:51.696159 | orchestrator | 2026-01-03 01:03:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:51.698163 | orchestrator | 2026-01-03 01:03:51 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:51.698385 | orchestrator | 2026-01-03 01:03:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:54.752376 | orchestrator | 2026-01-03 01:03:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:54.754186 | orchestrator | 2026-01-03 01:03:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:54.756723 | orchestrator | 2026-01-03 01:03:54 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:54.757028 | orchestrator | 2026-01-03 01:03:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:03:57.797213 | orchestrator | 2026-01-03 01:03:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:03:57.797292 | orchestrator | 2026-01-03 01:03:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:03:57.798159 | orchestrator | 2026-01-03 01:03:57 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:03:57.798180 | orchestrator | 2026-01-03 01:03:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:04:00.847619 | orchestrator | 2026-01-03 01:04:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:04:00.850469 | orchestrator | 2026-01-03 01:04:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:04:00.852447 | orchestrator | 2026-01-03 01:04:00 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 
01:04:00.852802 | orchestrator | 2026-01-03 01:04:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:04:03.901537 | orchestrator | 2026-01-03 01:04:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:04:03.903047 | orchestrator | 2026-01-03 01:04:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:04:03.904551 | orchestrator | 2026-01-03 01:04:03 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:04:03.904602 | orchestrator | 2026-01-03 01:04:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:04:06.988352 | orchestrator | 2026-01-03 01:04:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:04:06.988449 | orchestrator | 2026-01-03 01:04:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:04:06.988459 | orchestrator | 2026-01-03 01:04:06 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:04:06.988466 | orchestrator | 2026-01-03 01:04:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:04:10.015354 | orchestrator | 2026-01-03 01:04:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:04:10.017102 | orchestrator | 2026-01-03 01:04:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:04:10.018175 | orchestrator | 2026-01-03 01:04:10 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:04:10.018238 | orchestrator | 2026-01-03 01:04:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:04:13.073547 | orchestrator | 2026-01-03 01:04:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:04:13.076086 | orchestrator | 2026-01-03 01:04:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:04:13.077963 | orchestrator | 2026-01-03 01:04:13 | 
INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:04:13.078141 | orchestrator | 2026-01-03 01:04:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:04:16.123598 | orchestrator | 2026-01-03 01:04:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:04:16.124717 | orchestrator | 2026-01-03 01:04:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:04:16.126497 | orchestrator | 2026-01-03 01:04:16 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:04:16.126574 | orchestrator | 2026-01-03 01:04:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:04:19.179489 | orchestrator | 2026-01-03 01:04:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:04:19.181098 | orchestrator | 2026-01-03 01:04:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:04:19.182912 | orchestrator | 2026-01-03 01:04:19 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:04:19.182950 | orchestrator | 2026-01-03 01:04:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:04:22.233725 | orchestrator | 2026-01-03 01:04:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:04:22.235593 | orchestrator | 2026-01-03 01:04:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:04:22.237860 | orchestrator | 2026-01-03 01:04:22 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:04:22.237920 | orchestrator | 2026-01-03 01:04:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:04:25.287910 | orchestrator | 2026-01-03 01:04:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:04:25.288642 | orchestrator | 2026-01-03 01:04:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in 
state STARTED 2026-01-03 01:04:25.290179 | orchestrator | 2026-01-03 01:04:25 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:04:25.290230 | orchestrator | 2026-01-03 01:04:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:04:28.339921 | orchestrator | 2026-01-03 01:04:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:04:28.341069 | orchestrator | 2026-01-03 01:04:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:04:28.342175 | orchestrator | 2026-01-03 01:04:28 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:04:28.342214 | orchestrator | 2026-01-03 01:04:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:04:31.406489 | orchestrator | 2026-01-03 01:04:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:04:31.409525 | orchestrator | 2026-01-03 01:04:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:04:31.411475 | orchestrator | 2026-01-03 01:04:31 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:04:31.411555 | orchestrator | 2026-01-03 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:04:34.469364 | orchestrator | 2026-01-03 01:04:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:04:34.471817 | orchestrator | 2026-01-03 01:04:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:04:34.474185 | orchestrator | 2026-01-03 01:04:34 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:04:34.474227 | orchestrator | 2026-01-03 01:04:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:04:37.526265 | orchestrator | 2026-01-03 01:04:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:04:37.528277 | orchestrator 
| 2026-01-03 01:04:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:04:37.531100 | orchestrator | 2026-01-03 01:04:37 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:04:37.531413 | orchestrator | 2026-01-03 01:04:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:05:23.275756 | orchestrator | 2026-01-03 01:05:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:05:23.277822 | orchestrator | 2026-01-03 01:05:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:05:23.279544 | orchestrator | 2026-01-03 01:05:23 |
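The polling above re-checks the same three task IDs every few seconds until they leave the STARTED state. A minimal shell sketch of that poll-until-done pattern (the state source below is a hypothetical stand-in that reports STARTED twice and then SUCCESS; the real tool queries its task backend instead):

```shell
# Poll a task's state until it is no longer STARTED, waiting between checks.
n=0
state=STARTED
while [ "$state" = "STARTED" ]; do
  n=$((n+1))
  # Hypothetical state source: STARTED on the first two checks, then SUCCESS.
  if [ "$n" -lt 3 ]; then state=STARTED; else state=SUCCESS; fi
  echo "Task is in state $state"
  if [ "$state" = "STARTED" ]; then
    echo "Wait 1 second(s) until the next check"
    sleep 1
  fi
done
```

The real loop in the log behaves the same way, just with three task IDs checked per cycle and the state coming from the orchestrator's task queue.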
INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state STARTED 2026-01-03 01:05:23.279604 | orchestrator | 2026-01-03 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:05:26.322853 | orchestrator | 2026-01-03 01:05:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:05:26.324913 | orchestrator | 2026-01-03 01:05:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:05:26.326778 | orchestrator | 2026-01-03 01:05:26 | INFO  | Task ac1aa15e-e589-4fc2-96a2-0b3f0355693b is in state STARTED 2026-01-03 01:05:26.333335 | orchestrator | 2026-01-03 01:05:26.333377 | orchestrator | 2026-01-03 01:05:26.333384 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-01-03 01:05:26.333390 | orchestrator | 2026-01-03 01:05:26.333395 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-01-03 01:05:26.333401 | orchestrator | Saturday 03 January 2026 01:00:24 +0000 (0:00:00.084) 0:00:00.084 ****** 2026-01-03 01:05:26.333406 | orchestrator | changed: [localhost] 2026-01-03 01:05:26.333417 | orchestrator | 2026-01-03 01:05:26.333422 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-01-03 01:05:26.333428 | orchestrator | Saturday 03 January 2026 01:00:25 +0000 (0:00:00.788) 0:00:00.873 ****** 2026-01-03 01:05:26.333433 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2026-01-03 01:05:26.333439 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left). 2026-01-03 01:05:26.333444 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left). 
2026-01-03 01:05:26.333449 | orchestrator | changed: [localhost] 2026-01-03 01:05:26.333454 | orchestrator | 2026-01-03 01:05:26.333459 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-01-03 01:05:26.333464 | orchestrator | Saturday 03 January 2026 01:02:11 +0000 (0:01:45.617) 0:01:46.491 ****** 2026-01-03 01:05:26.333469 | orchestrator | changed: [localhost] 2026-01-03 01:05:26.333475 | orchestrator | 2026-01-03 01:05:26.333484 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 01:05:26.333489 | orchestrator | 2026-01-03 01:05:26.333494 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 01:05:26.333499 | orchestrator | Saturday 03 January 2026 01:02:15 +0000 (0:00:04.081) 0:01:50.572 ****** 2026-01-03 01:05:26.333505 | orchestrator | ok: [testbed-node-0] 2026-01-03 01:05:26.333510 | orchestrator | ok: [testbed-node-1] 2026-01-03 01:05:26.333516 | orchestrator | ok: [testbed-node-2] 2026-01-03 01:05:26.333520 | orchestrator | 2026-01-03 01:05:26.333524 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 01:05:26.333527 | orchestrator | Saturday 03 January 2026 01:02:15 +0000 (0:00:00.342) 0:01:50.914 ****** 2026-01-03 01:05:26.333530 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-01-03 01:05:26.333533 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-01-03 01:05:26.333537 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-01-03 01:05:26.333540 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-01-03 01:05:26.333543 | orchestrator | 2026-01-03 01:05:26.333546 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-01-03 01:05:26.333549 | orchestrator | skipping: no hosts matched 
2026-01-03 01:05:26.333553 | orchestrator |
2026-01-03 01:05:26.333556 | orchestrator | PLAY RECAP *********************************************************************
2026-01-03 01:05:26.333560 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:05:26.333564 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:05:26.333569 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:05:26.333586 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-03 01:05:26.333589 | orchestrator |
2026-01-03 01:05:26.333592 | orchestrator |
2026-01-03 01:05:26.333596 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 01:05:26.333599 | orchestrator | Saturday 03 January 2026 01:02:16 +0000 (0:00:00.692) 0:01:51.607 ******
2026-01-03 01:05:26.333602 | orchestrator | ===============================================================================
2026-01-03 01:05:26.333605 | orchestrator | Download ironic-agent initramfs --------------------------------------- 105.62s
2026-01-03 01:05:26.333609 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.08s
2026-01-03 01:05:26.333612 | orchestrator | Ensure the destination directory exists --------------------------------- 0.79s
2026-01-03 01:05:26.333615 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s
2026-01-03 01:05:26.333618 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-01-03 01:05:26.333621 | orchestrator |
2026-01-03 01:05:26.333625 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-03 01:05:26.333628 | orchestrator | 2.16.14
2026-01-03 01:05:26.333631
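The "Download ironic-agent initramfs" task above succeeded only on its third attempt, which is Ansible's `retries`/`until` behaviour: re-run the failing step and count down the remaining retries. A minimal sketch of that loop (`fetch` is a hypothetical stand-in that fails twice before succeeding, mirroring the three attempts in the log):

```shell
# Retry a failing step until it succeeds or the retry budget is exhausted.
attempt=0
fetch() {
  attempt=$((attempt+1))
  # Hypothetical stand-in: fails on attempts 1 and 2, succeeds on attempt 3.
  [ "$attempt" -ge 3 ]
}
retries=3
left=$retries
until fetch; do
  left=$((left-1))
  if [ "$left" -le 0 ]; then echo "FAILED: out of retries"; break; fi
  echo "FAILED - RETRYING ($left retries left)"
done
```

This matches the log's countdown messages: two "FAILED - RETRYING" lines, then a successful (changed) result on the last allowed attempt.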
| orchestrator | 2026-01-03 01:05:26.333634 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-01-03 01:05:26.333637 | orchestrator | 2026-01-03 01:05:26.333640 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-01-03 01:05:26.333644 | orchestrator | Saturday 03 January 2026 01:01:23 +0000 (0:00:00.261) 0:00:00.261 ****** 2026-01-03 01:05:26.333647 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.333650 | orchestrator | 2026-01-03 01:05:26.333653 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-01-03 01:05:26.333658 | orchestrator | Saturday 03 January 2026 01:01:24 +0000 (0:00:01.470) 0:00:01.731 ****** 2026-01-03 01:05:26.333663 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.333668 | orchestrator | 2026-01-03 01:05:26.333673 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-01-03 01:05:26.333679 | orchestrator | Saturday 03 January 2026 01:01:25 +0000 (0:00:01.062) 0:00:02.794 ****** 2026-01-03 01:05:26.333684 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.333690 | orchestrator | 2026-01-03 01:05:26.333695 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-01-03 01:05:26.333716 | orchestrator | Saturday 03 January 2026 01:01:26 +0000 (0:00:01.073) 0:00:03.868 ****** 2026-01-03 01:05:26.333720 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.333723 | orchestrator | 2026-01-03 01:05:26.333727 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-01-03 01:05:26.333730 | orchestrator | Saturday 03 January 2026 01:01:27 +0000 (0:00:01.229) 0:00:05.097 ****** 2026-01-03 01:05:26.333733 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.333736 | orchestrator | 2026-01-03 01:05:26.333739 | 
orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-01-03 01:05:26.333743 | orchestrator | Saturday 03 January 2026 01:01:29 +0000 (0:00:01.160) 0:00:06.257 ****** 2026-01-03 01:05:26.333748 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.333754 | orchestrator | 2026-01-03 01:05:26.333789 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-01-03 01:05:26.333796 | orchestrator | Saturday 03 January 2026 01:01:30 +0000 (0:00:01.161) 0:00:07.418 ****** 2026-01-03 01:05:26.333801 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.333805 | orchestrator | 2026-01-03 01:05:26.333808 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-01-03 01:05:26.333812 | orchestrator | Saturday 03 January 2026 01:01:32 +0000 (0:00:02.090) 0:00:09.508 ****** 2026-01-03 01:05:26.333817 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.333852 | orchestrator | 2026-01-03 01:05:26.333864 | orchestrator | TASK [Create admin user] ******************************************************* 2026-01-03 01:05:26.333870 | orchestrator | Saturday 03 January 2026 01:01:33 +0000 (0:00:01.240) 0:00:10.749 ****** 2026-01-03 01:05:26.333875 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.333890 | orchestrator | 2026-01-03 01:05:26.333896 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-01-03 01:05:26.333899 | orchestrator | Saturday 03 January 2026 01:02:36 +0000 (0:01:02.843) 0:01:13.592 ****** 2026-01-03 01:05:26.333902 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:05:26.333905 | orchestrator | 2026-01-03 01:05:26.333908 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-03 01:05:26.333911 | orchestrator | 2026-01-03 01:05:26.333928 | orchestrator | TASK [Restart ceph manager 
service] ******************************************** 2026-01-03 01:05:26.333932 | orchestrator | Saturday 03 January 2026 01:02:36 +0000 (0:00:00.171) 0:01:13.764 ****** 2026-01-03 01:05:26.333935 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:05:26.333938 | orchestrator | 2026-01-03 01:05:26.333941 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-03 01:05:26.333946 | orchestrator | 2026-01-03 01:05:26.333951 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-03 01:05:26.333957 | orchestrator | Saturday 03 January 2026 01:02:38 +0000 (0:00:01.969) 0:01:15.733 ****** 2026-01-03 01:05:26.333991 | orchestrator | changed: [testbed-node-1] 2026-01-03 01:05:26.333995 | orchestrator | 2026-01-03 01:05:26.334000 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-03 01:05:26.334006 | orchestrator | 2026-01-03 01:05:26.334031 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-03 01:05:26.334037 | orchestrator | Saturday 03 January 2026 01:02:49 +0000 (0:00:11.387) 0:01:27.121 ****** 2026-01-03 01:05:26.334042 | orchestrator | changed: [testbed-node-2] 2026-01-03 01:05:26.334048 | orchestrator | 2026-01-03 01:05:26.334053 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 01:05:26.334059 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-03 01:05:26.334066 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 01:05:26.334071 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 01:05:26.334077 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-03 
01:05:26.334082 | orchestrator |
2026-01-03 01:05:26.334088 | orchestrator |
2026-01-03 01:05:26.334094 | orchestrator |
2026-01-03 01:05:26.334099 | orchestrator | TASKS RECAP ********************************************************************
2026-01-03 01:05:26.334104 | orchestrator | Saturday 03 January 2026 01:03:01 +0000 (0:00:11.252) 0:01:38.374 ******
2026-01-03 01:05:26.334108 | orchestrator | ===============================================================================
2026-01-03 01:05:26.334112 | orchestrator | Create admin user ------------------------------------------------------ 62.84s
2026-01-03 01:05:26.334116 | orchestrator | Restart ceph manager service ------------------------------------------- 24.61s
2026-01-03 01:05:26.334119 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.09s
2026-01-03 01:05:26.334123 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.47s
2026-01-03 01:05:26.334127 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.24s
2026-01-03 01:05:26.334131 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.23s
2026-01-03 01:05:26.334135 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.16s
2026-01-03 01:05:26.334138 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.16s
2026-01-03 01:05:26.334146 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.07s
2026-01-03 01:05:26.334150 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.06s
2026-01-03 01:05:26.334154 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.17s
2026-01-03 01:05:26.334160 | orchestrator |
2026-01-03 01:05:26.334165 | orchestrator |
2026-01-03 01:05:26.334170 | orchestrator | PLAY [Group
hosts based on configuration] ************************************** 2026-01-03 01:05:26.334176 | orchestrator | 2026-01-03 01:05:26.334185 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 01:05:26.334194 | orchestrator | Saturday 03 January 2026 01:02:19 +0000 (0:00:00.268) 0:00:00.268 ****** 2026-01-03 01:05:26.334200 | orchestrator | ok: [testbed-manager] 2026-01-03 01:05:26.334206 | orchestrator | ok: [testbed-node-0] 2026-01-03 01:05:26.334211 | orchestrator | ok: [testbed-node-1] 2026-01-03 01:05:26.334216 | orchestrator | ok: [testbed-node-2] 2026-01-03 01:05:26.334222 | orchestrator | ok: [testbed-node-3] 2026-01-03 01:05:26.334227 | orchestrator | ok: [testbed-node-4] 2026-01-03 01:05:26.334232 | orchestrator | ok: [testbed-node-5] 2026-01-03 01:05:26.334237 | orchestrator | 2026-01-03 01:05:26.334243 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 01:05:26.334248 | orchestrator | Saturday 03 January 2026 01:02:19 +0000 (0:00:00.772) 0:00:01.041 ****** 2026-01-03 01:05:26.334255 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-03 01:05:26.334259 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-03 01:05:26.334263 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-03 01:05:26.334266 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-03 01:05:26.334269 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-01-03 01:05:26.334272 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-03 01:05:26.334275 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-03 01:05:26.334278 | orchestrator | 2026-01-03 01:05:26.334282 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-01-03 01:05:26.334285 | orchestrator | 
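The ceph dashboard play earlier in this log corresponds to plain `ceph` CLI calls. A rough sketch, assuming the user name and option values shown in the task names above; the password handling is simplified and the temp-file path is illustrative, not taken from the playbook:

```
# Approximate CLI equivalent of the dashboard bootstrap tasks above (sketch only;
# the testbed drives these through Ansible on testbed-manager):
ceph mgr module disable dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404
ceph mgr module enable dashboard
echo "$CEPH_DASHBOARD_PASSWORD" > /tmp/dashboard-password   # illustrative temp file
ceph dashboard ac-user-create admin -i /tmp/dashboard-password administrator
rm /tmp/dashboard-password
```

The module disable/enable around the `config set` calls explains the "Restart ceph manager service" plays that follow: the mgr daemons are restarted on each node so the new dashboard settings take effect.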
2026-01-03 01:05:26.334288 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-03 01:05:26.334291 | orchestrator | Saturday 03 January 2026 01:02:20 +0000 (0:00:00.643) 0:00:01.685 ****** 2026-01-03 01:05:26.334294 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 01:05:26.334298 | orchestrator | 2026-01-03 01:05:26.334301 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-01-03 01:05:26.334304 | orchestrator | Saturday 03 January 2026 01:02:21 +0000 (0:00:01.293) 0:00:02.979 ****** 2026-01-03 01:05:26.334309 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-03 01:05:26.334314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334320 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334339 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334346 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334349 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334355 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334370 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-03 01:05:26.334375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334385 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334398 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334408 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334417 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334438 | orchestrator | 2026-01-03 01:05:26.334441 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-03 01:05:26.334444 | orchestrator | Saturday 03 January 2026 01:02:24 +0000 (0:00:02.903) 0:00:05.882 ****** 2026-01-03 01:05:26.334448 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-03 01:05:26.334451 | orchestrator | 2026-01-03 01:05:26.334454 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-03 01:05:26.334457 | orchestrator | Saturday 03 January 2026 01:02:26 +0000 (0:00:01.334) 0:00:07.216 ****** 2026-01-03 01:05:26.334461 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-03 01:05:26.334469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334524 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334540 | orchestrator | changed: [testbed-node-3] 
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334545 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.334555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334576 | orchestrator | 2026-01-03 01:05:26 | INFO  | Task 51f53e1c-656b-4584-8586-4d99c7050fff is in state SUCCESS 2026-01-03 01:05:26.334580 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334583 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334599 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334609 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334613 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-03 01:05:26.334619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334622 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.334629 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.334646 | orchestrator | 2026-01-03 01:05:26.334656 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-03 01:05:26.334660 | orchestrator | Saturday 03 January 2026 01:02:31 +0000 (0:00:05.407) 0:00:12.624 ****** 2026-01-03 01:05:26.334663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.334666 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-03 01:05:26.334670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-01-03 01:05:26.334676 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.334683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.334686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334708 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.334712 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-03 01:05:26.334715 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334723 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:05:26.334726 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.334729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.334746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.334753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.334763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334766 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:05:26.334770 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:05:26.334773 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:05:26.334780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.334786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.334789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.334793 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:05:26.334796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.334799 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.334803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.334810 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:05:26.334813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.334817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.334850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.334854 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:05:26.334857 | orchestrator | 2026-01-03 01:05:26.334861 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-03 01:05:26.334864 | orchestrator | Saturday 03 January 2026 01:02:33 +0000 (0:00:01.582) 0:00:14.206 ****** 2026-01-03 01:05:26.334867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.334871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.334911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334917 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-03 01:05:26.334931 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.334937 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.334943 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-03 01:05:26.334948 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334951 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:05:26.334955 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:05:26.334958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.334961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.334967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.335146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.335159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.335166 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:05:26.335171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.335176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.335182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.335188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.335198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.335204 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:05:26.335212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.335222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.335227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-03 01:05:26.335232 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:05:26.335237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.335242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.335247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.335252 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:05:26.335261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-03 01:05:26.335266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.335277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-03 01:05:26.335282 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:05:26.335301 | orchestrator | 2026-01-03 01:05:26.335307 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-03 01:05:26.335313 | orchestrator | Saturday 03 January 2026 01:02:34 +0000 (0:00:01.930) 0:00:16.137 ****** 2026-01-03 01:05:26.335319 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-03 01:05:26.335326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.335332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.335337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.335347 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.335352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.335361 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.335369 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.335375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.335412 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.335418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.335424 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.335434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.335445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.335460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.335474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.335481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.335487 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-03 01:05:26.335497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.335503 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.335509 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.335530 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.335548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.335554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.335560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.335571 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.335579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.335585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.335597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.335603 | orchestrator | 2026-01-03 01:05:26.335609 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-03 01:05:26.335615 | orchestrator | Saturday 03 January 2026 01:02:41 +0000 (0:00:06.153) 0:00:22.290 ****** 2026-01-03 01:05:26.335655 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 01:05:26.335662 | orchestrator | 2026-01-03 01:05:26.335668 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules 
files] *********** 2026-01-03 01:05:26.335676 | orchestrator | Saturday 03 January 2026 01:02:42 +0000 (0:00:01.335) 0:00:23.625 ****** 2026-01-03 01:05:26.335685 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109142, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.994508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335691 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109142, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.994508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335697 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109142, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.994508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-01-03 01:05:26.335707 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109171, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399498.0002468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335713 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109171, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399498.0002468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335719 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109142, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.994508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335728 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109142, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.994508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335737 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109171, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399498.0002468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335742 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109171, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399498.0002468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335751 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
55956, 'inode': 1109131, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9927077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335757 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1109131, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9927077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335762 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109142, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.994508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335768 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109171, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399498.0002468, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335776 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1109131, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9927077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335784 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1109142, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.994508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.335790 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1109131, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9927077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-01-03 01:05:26.335799 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1109154, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9963264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335804 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1109154, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9963264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335823 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109171, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399498.0002468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335831 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1109126, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.989308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335839 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1109154, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9963264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335848 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1109131, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9927077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335854 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1109154, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9963264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335863 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1109131, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9927077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335869 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1109126, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.989308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335875 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1109126, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 
1767399497.989308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335891 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1109144, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9947484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335897 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1109154, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9963264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335942 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1109154, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9963264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335950 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1109126, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.989308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335959 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1109126, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.989308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335964 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1109144, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9947484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335987 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1109144, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9947484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335994 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1109152, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9958985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.335999 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1109144, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9947484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.336010 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1109152, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9958985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.336020 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1109126, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.989308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.336026 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1109152, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9958985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.336032 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1109171, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399498.0002468, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.336038 | orchestrator | skipping: [testbed-node-0 .. testbed-node-5] => (items: /operations/prometheus/{haproxy,hardware,node,elasticsearch,alertmanager,mysql,rabbitmq,prometheus-extra,redfish}.rules, /operations/prometheus/{prometheus,alertmanager,ceph,node}.rec.rules; all regular files, mode 0644, owner root:root) 2026-01-03 01:05:26.336171 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules) 2026-01-03 01:05:26.336554 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules) 2026-01-03 01:05:26.336653 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:05:26.336774 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:05:26.336780 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules) 2026-01-03 01:05:26.336797 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1109151, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime':
1767399497.9956686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.336802 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1109181, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399498.0038252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.336807 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:05:26.336812 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1109181, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399498.0038252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.336818 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:05:26.336822 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1109149, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.995333, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.336831 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1109149, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.995333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.336837 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1109181, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399498.0038252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.336842 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:05:26.336853 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1109181, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399498.0038252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-03 01:05:26.336858 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:05:26.336863 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1109144, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9947484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.336868 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1109152, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9958985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.336873 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1109147, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9950562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}) 2026-01-03 01:05:26.336954 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1109140, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9938374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.336964 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109164, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9993322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.336969 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109121, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9886835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.336982 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1109184, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399498.0048754, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.336988 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1109160, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9971542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.336994 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1109128, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.989308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.336999 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
5051, 'inode': 1109124, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9891205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.337009 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1109151, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9956686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.337014 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1109149, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.995333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.337018 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1109181, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399498.0038252, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-03 01:05:26.337021 | orchestrator | 2026-01-03 01:05:26.337024 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-01-03 01:05:26.337028 | orchestrator | Saturday 03 January 2026 01:03:07 +0000 (0:00:25.181) 0:00:48.807 ****** 2026-01-03 01:05:26.337031 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 01:05:26.337034 | orchestrator | 2026-01-03 01:05:26.337039 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-01-03 01:05:26.337043 | orchestrator | Saturday 03 January 2026 01:03:08 +0000 (0:00:00.713) 0:00:49.521 ****** 2026-01-03 01:05:26.337046 | orchestrator | [WARNING]: Skipped 2026-01-03 01:05:26.337052 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337055 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-01-03 01:05:26.337058 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337062 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-01-03 01:05:26.337065 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 01:05:26.337068 | orchestrator | [WARNING]: Skipped 2026-01-03 01:05:26.337071 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337074 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-01-03 01:05:26.337077 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337081 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-01-03 01:05:26.337084 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 01:05:26.337087 | orchestrator | 
[WARNING]: Skipped 2026-01-03 01:05:26.337090 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337093 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-01-03 01:05:26.337096 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337102 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-01-03 01:05:26.337105 | orchestrator | [WARNING]: Skipped 2026-01-03 01:05:26.337109 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337112 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-01-03 01:05:26.337115 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337118 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-01-03 01:05:26.337122 | orchestrator | [WARNING]: Skipped 2026-01-03 01:05:26.337126 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337129 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-01-03 01:05:26.337133 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337136 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-01-03 01:05:26.337140 | orchestrator | [WARNING]: Skipped 2026-01-03 01:05:26.337144 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337148 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-01-03 01:05:26.337152 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337155 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-01-03 01:05:26.337159 | orchestrator | [WARNING]: Skipped 2026-01-03 01:05:26.337163 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337166 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-01-03 01:05:26.337170 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-03 01:05:26.337174 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-01-03 01:05:26.337178 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-03 01:05:26.337181 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-03 01:05:26.337185 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-03 01:05:26.337189 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-03 01:05:26.337193 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-03 01:05:26.337196 | orchestrator | 2026-01-03 01:05:26.337200 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-01-03 01:05:26.337204 | orchestrator | Saturday 03 January 2026 01:03:10 +0000 (0:00:01.753) 0:00:51.274 ****** 2026-01-03 01:05:26.337209 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-03 01:05:26.337215 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-03 01:05:26.337220 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:05:26.337225 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:05:26.337231 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-03 01:05:26.337237 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:05:26.337243 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-03 01:05:26.337248 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:05:26.337254 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-03 01:05:26.337259 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:05:26.337263 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-03 01:05:26.337267 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:05:26.337270 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-01-03 01:05:26.337274 | orchestrator | 2026-01-03 01:05:26.337278 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-01-03 01:05:26.337284 | orchestrator | Saturday 03 January 2026 01:03:24 +0000 (0:00:14.562) 0:01:05.836 ****** 2026-01-03 01:05:26.337290 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-03 01:05:26.337294 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-03 01:05:26.337298 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:05:26.337304 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-03 01:05:26.337308 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:05:26.337312 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:05:26.337315 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-03 01:05:26.337319 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:05:26.337323 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-03 01:05:26.337327 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:05:26.337331 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-03 01:05:26.337334 | orchestrator | 
skipping: [testbed-node-5] 2026-01-03 01:05:26.337338 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-03 01:05:26.337342 | orchestrator | 2026-01-03 01:05:26.337345 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-03 01:05:26.337349 | orchestrator | Saturday 03 January 2026 01:03:27 +0000 (0:00:02.717) 0:01:08.553 ****** 2026-01-03 01:05:26.337353 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-03 01:05:26.337357 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-03 01:05:26.337361 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-03 01:05:26.337365 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:05:26.337368 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:05:26.337372 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:05:26.337376 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-03 01:05:26.337380 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:05:26.337383 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-03 01:05:26.337387 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-03 01:05:26.337391 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:05:26.337394 | orchestrator | skipping: [testbed-node-5] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-03 01:05:26.337398 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:05:26.337402 | orchestrator | 2026-01-03 01:05:26.337406 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-03 01:05:26.337410 | orchestrator | Saturday 03 January 2026 01:03:28 +0000 (0:00:01.596) 0:01:10.150 ****** 2026-01-03 01:05:26.337413 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 01:05:26.337417 | orchestrator | 2026-01-03 01:05:26.337421 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-03 01:05:26.337425 | orchestrator | Saturday 03 January 2026 01:03:29 +0000 (0:00:00.733) 0:01:10.883 ****** 2026-01-03 01:05:26.337428 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:05:26.337432 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:05:26.337436 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:05:26.337442 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:05:26.337446 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:05:26.337449 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:05:26.337453 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:05:26.337457 | orchestrator | 2026-01-03 01:05:26.337461 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-01-03 01:05:26.337465 | orchestrator | Saturday 03 January 2026 01:03:30 +0000 (0:00:00.694) 0:01:11.578 ****** 2026-01-03 01:05:26.337468 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:05:26.337472 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:05:26.337476 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:05:26.337480 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:05:26.337483 | orchestrator | changed: [testbed-node-1] 2026-01-03 01:05:26.337487 | 
orchestrator | changed: [testbed-node-0] 2026-01-03 01:05:26.337491 | orchestrator | changed: [testbed-node-2] 2026-01-03 01:05:26.337494 | orchestrator | 2026-01-03 01:05:26.337498 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-03 01:05:26.337501 | orchestrator | Saturday 03 January 2026 01:03:32 +0000 (0:00:01.979) 0:01:13.558 ****** 2026-01-03 01:05:26.337506 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:05:26.337510 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:05:26.337514 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:05:26.337517 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:05:26.337521 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:05:26.337525 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:05:26.337529 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:05:26.337533 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:05:26.337538 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:05:26.337542 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:05:26.337546 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:05:26.337552 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:05:26.337556 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-03 01:05:26.337560 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:05:26.337563 | orchestrator | 2026-01-03 01:05:26.337566 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-03 
01:05:26.337569 | orchestrator | Saturday 03 January 2026 01:03:33 +0000 (0:00:01.615) 0:01:15.173 ****** 2026-01-03 01:05:26.337573 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-03 01:05:26.337576 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-03 01:05:26.337579 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:05:26.337582 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-03 01:05:26.337585 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:05:26.337589 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:05:26.337592 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-03 01:05:26.337595 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:05:26.337598 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-03 01:05:26.337601 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:05:26.337604 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-03 01:05:26.337611 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:05:26.337615 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-03 01:05:26.337618 | orchestrator | 2026-01-03 01:05:26.337621 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-03 01:05:26.337624 | orchestrator | Saturday 03 January 2026 01:03:35 +0000 (0:00:01.362) 0:01:16.535 ****** 2026-01-03 01:05:26.337627 | orchestrator | [WARNING]: Skipped 2026-01-03 01:05:26.337630 | orchestrator 
| '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-03 01:05:26.337633 | orchestrator | due to this access issue: 2026-01-03 01:05:26.337637 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-03 01:05:26.337640 | orchestrator | not a directory 2026-01-03 01:05:26.337643 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-03 01:05:26.337646 | orchestrator | 2026-01-03 01:05:26.337649 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-01-03 01:05:26.337652 | orchestrator | Saturday 03 January 2026 01:03:36 +0000 (0:00:01.114) 0:01:17.649 ****** 2026-01-03 01:05:26.337655 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:05:26.337658 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:05:26.337662 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:05:26.337665 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:05:26.337668 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:05:26.337671 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:05:26.337674 | orchestrator | skipping: [testbed-node-5] 2026-01-03 01:05:26.337677 | orchestrator | 2026-01-03 01:05:26.337680 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-03 01:05:26.337683 | orchestrator | Saturday 03 January 2026 01:03:37 +0000 (0:00:00.897) 0:01:18.546 ****** 2026-01-03 01:05:26.337686 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:05:26.337690 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:05:26.337693 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:05:26.337696 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:05:26.337699 | orchestrator | skipping: [testbed-node-3] 2026-01-03 01:05:26.337702 | orchestrator | skipping: [testbed-node-4] 2026-01-03 01:05:26.337705 | orchestrator | skipping: [testbed-node-5] 2026-01-03 
01:05:26.337708 | orchestrator | 2026-01-03 01:05:26.337711 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-01-03 01:05:26.337715 | orchestrator | Saturday 03 January 2026 01:03:38 +0000 (0:00:00.839) 0:01:19.386 ****** 2026-01-03 01:05:26.337718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.337724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.337730 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-03 01:05:26.337736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.337739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.337742 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.337746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.337749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-03 01:05:26.337753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.337759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.337765 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.337769 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.337772 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.337775 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.337778 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.337782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.337786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.337793 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-03 01:05:26.337799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.337803 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.337806 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.337809 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.337813 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.337816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.337822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.337828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-03 01:05:26.337831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.337834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.337838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-03 01:05:26.337841 | orchestrator | 2026-01-03 01:05:26.337844 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-01-03 01:05:26.337847 | orchestrator | Saturday 03 January 2026 01:03:42 +0000 (0:00:04.267) 0:01:23.654 ****** 2026-01-03 01:05:26.337851 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-03 01:05:26.337854 | orchestrator | skipping: [testbed-manager] 2026-01-03 01:05:26.337857 | orchestrator | 2026-01-03 01:05:26.337860 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-03 01:05:26.337863 | orchestrator | Saturday 03 January 2026 01:03:43 +0000 (0:00:01.171) 0:01:24.825 ****** 2026-01-03 01:05:26.337867 | orchestrator | 2026-01-03 01:05:26.337870 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-03 01:05:26.337873 | orchestrator | Saturday 03 January 2026 01:03:43 +0000 (0:00:00.068) 0:01:24.893 ****** 2026-01-03 
01:05:26.337876 | orchestrator | 2026-01-03 01:05:26.337879 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-03 01:05:26.337894 | orchestrator | Saturday 03 January 2026 01:03:43 +0000 (0:00:00.069) 0:01:24.963 ****** 2026-01-03 01:05:26.337899 | orchestrator | 2026-01-03 01:05:26.337904 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-03 01:05:26.337910 | orchestrator | Saturday 03 January 2026 01:03:43 +0000 (0:00:00.061) 0:01:25.024 ****** 2026-01-03 01:05:26.337916 | orchestrator | 2026-01-03 01:05:26.337919 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-03 01:05:26.337922 | orchestrator | Saturday 03 January 2026 01:03:44 +0000 (0:00:00.257) 0:01:25.282 ****** 2026-01-03 01:05:26.337925 | orchestrator | 2026-01-03 01:05:26.337928 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-03 01:05:26.337931 | orchestrator | Saturday 03 January 2026 01:03:44 +0000 (0:00:00.063) 0:01:25.346 ****** 2026-01-03 01:05:26.337934 | orchestrator | 2026-01-03 01:05:26.337938 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-03 01:05:26.337941 | orchestrator | Saturday 03 January 2026 01:03:44 +0000 (0:00:00.064) 0:01:25.411 ****** 2026-01-03 01:05:26.337944 | orchestrator | 2026-01-03 01:05:26.337947 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-01-03 01:05:26.337950 | orchestrator | Saturday 03 January 2026 01:03:44 +0000 (0:00:00.086) 0:01:25.498 ****** 2026-01-03 01:05:26.337953 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.337956 | orchestrator | 2026-01-03 01:05:26.337959 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-01-03 01:05:26.337964 | orchestrator | Saturday 
03 January 2026 01:04:04 +0000 (0:00:20.386) 0:01:45.884 ****** 2026-01-03 01:05:26.337967 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:05:26.337971 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.337974 | orchestrator | changed: [testbed-node-2] 2026-01-03 01:05:26.337977 | orchestrator | changed: [testbed-node-5] 2026-01-03 01:05:26.337982 | orchestrator | changed: [testbed-node-3] 2026-01-03 01:05:26.337985 | orchestrator | changed: [testbed-node-1] 2026-01-03 01:05:26.337988 | orchestrator | changed: [testbed-node-4] 2026-01-03 01:05:26.337991 | orchestrator | 2026-01-03 01:05:26.337994 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-01-03 01:05:26.337997 | orchestrator | Saturday 03 January 2026 01:04:18 +0000 (0:00:13.730) 0:01:59.615 ****** 2026-01-03 01:05:26.338000 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:05:26.338004 | orchestrator | changed: [testbed-node-2] 2026-01-03 01:05:26.338007 | orchestrator | changed: [testbed-node-1] 2026-01-03 01:05:26.338010 | orchestrator | 2026-01-03 01:05:26.338038 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-01-03 01:05:26.338042 | orchestrator | Saturday 03 January 2026 01:04:28 +0000 (0:00:10.161) 0:02:09.776 ****** 2026-01-03 01:05:26.338045 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:05:26.338048 | orchestrator | changed: [testbed-node-1] 2026-01-03 01:05:26.338052 | orchestrator | changed: [testbed-node-2] 2026-01-03 01:05:26.338055 | orchestrator | 2026-01-03 01:05:26.338058 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-01-03 01:05:26.338061 | orchestrator | Saturday 03 January 2026 01:04:38 +0000 (0:00:10.005) 0:02:19.782 ****** 2026-01-03 01:05:26.338064 | orchestrator | changed: [testbed-node-1] 2026-01-03 01:05:26.338067 | orchestrator | changed: [testbed-node-2] 2026-01-03 
01:05:26.338070 | orchestrator | changed: [testbed-node-5] 2026-01-03 01:05:26.338073 | orchestrator | changed: [testbed-node-3] 2026-01-03 01:05:26.338076 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:05:26.338080 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.338083 | orchestrator | changed: [testbed-node-4] 2026-01-03 01:05:26.338086 | orchestrator | 2026-01-03 01:05:26.338089 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-01-03 01:05:26.338092 | orchestrator | Saturday 03 January 2026 01:04:51 +0000 (0:00:12.740) 0:02:32.522 ****** 2026-01-03 01:05:26.338095 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.338098 | orchestrator | 2026-01-03 01:05:26.338101 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-01-03 01:05:26.338105 | orchestrator | Saturday 03 January 2026 01:05:04 +0000 (0:00:12.871) 0:02:45.394 ****** 2026-01-03 01:05:26.338108 | orchestrator | changed: [testbed-node-1] 2026-01-03 01:05:26.338113 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:05:26.338116 | orchestrator | changed: [testbed-node-2] 2026-01-03 01:05:26.338119 | orchestrator | 2026-01-03 01:05:26.338122 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-01-03 01:05:26.338126 | orchestrator | Saturday 03 January 2026 01:05:08 +0000 (0:00:04.094) 0:02:49.488 ****** 2026-01-03 01:05:26.338129 | orchestrator | changed: [testbed-manager] 2026-01-03 01:05:26.338132 | orchestrator | 2026-01-03 01:05:26.338135 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-01-03 01:05:26.338138 | orchestrator | Saturday 03 January 2026 01:05:13 +0000 (0:00:04.967) 0:02:54.455 ****** 2026-01-03 01:05:26.338141 | orchestrator | changed: [testbed-node-4] 2026-01-03 01:05:26.338144 | orchestrator | changed: [testbed-node-5] 2026-01-03 
01:05:26.338147 | orchestrator | changed: [testbed-node-3] 2026-01-03 01:05:26.338150 | orchestrator | 2026-01-03 01:05:26.338153 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 01:05:26.338157 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-03 01:05:26.338160 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-03 01:05:26.338163 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-03 01:05:26.338167 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-03 01:05:26.338170 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-03 01:05:26.338173 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-03 01:05:26.338176 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-03 01:05:26.338179 | orchestrator | 2026-01-03 01:05:26.338182 | orchestrator | 2026-01-03 01:05:26.338185 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 01:05:26.338189 | orchestrator | Saturday 03 January 2026 01:05:24 +0000 (0:00:10.744) 0:03:05.200 ****** 2026-01-03 01:05:26.338192 | orchestrator | =============================================================================== 2026-01-03 01:05:26.338195 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.18s 2026-01-03 01:05:26.338198 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.39s 2026-01-03 01:05:26.338201 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.56s 
2026-01-03 01:05:26.338204 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.73s 2026-01-03 01:05:26.338207 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.87s 2026-01-03 01:05:26.338212 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 12.74s 2026-01-03 01:05:26.338215 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.74s 2026-01-03 01:05:26.338220 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.16s 2026-01-03 01:05:26.338224 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.01s 2026-01-03 01:05:26.338227 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.15s 2026-01-03 01:05:26.338230 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.41s 2026-01-03 01:05:26.338233 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.97s 2026-01-03 01:05:26.338238 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.27s 2026-01-03 01:05:26.338241 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 4.09s 2026-01-03 01:05:26.338244 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.90s 2026-01-03 01:05:26.338247 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.72s 2026-01-03 01:05:26.338251 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.98s 2026-01-03 01:05:26.338254 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.93s 2026-01-03 01:05:26.338257 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.75s 2026-01-03 
01:05:26.338260 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.62s 2026-01-03 01:05:26.338263 | orchestrator | 2026-01-03 01:05:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:05:29.391972 | orchestrator | 2026-01-03 01:05:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:05:29.394057 | orchestrator | 2026-01-03 01:05:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:05:29.396297 | orchestrator | 2026-01-03 01:05:29 | INFO  | Task ac1aa15e-e589-4fc2-96a2-0b3f0355693b is in state STARTED 2026-01-03 01:05:29.396422 | orchestrator | 2026-01-03 01:05:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:05:32.439307 | orchestrator | 2026-01-03 01:05:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:05:32.439618 | orchestrator | 2026-01-03 01:05:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:05:32.440978 | orchestrator | 2026-01-03 01:05:32 | INFO  | Task ac1aa15e-e589-4fc2-96a2-0b3f0355693b is in state STARTED 2026-01-03 01:05:32.441337 | orchestrator | 2026-01-03 01:05:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:05:35.493442 | orchestrator | 2026-01-03 01:05:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:05:35.496365 | orchestrator | 2026-01-03 01:05:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:05:35.497794 | orchestrator | 2026-01-03 01:05:35 | INFO  | Task ac1aa15e-e589-4fc2-96a2-0b3f0355693b is in state STARTED 2026-01-03 01:05:35.498515 | orchestrator | 2026-01-03 01:05:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:05:38.537663 | orchestrator | 2026-01-03 01:05:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:05:38.539722 | orchestrator | 2026-01-03 01:05:38 | INFO  | Task 
bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:05:38.541586 | orchestrator | 2026-01-03 01:05:38 | INFO  | Task ac1aa15e-e589-4fc2-96a2-0b3f0355693b is in state STARTED 2026-01-03 01:05:38.541664 | orchestrator | 2026-01-03 01:05:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:34.491308 | orchestrator | 2026-01-03 01:07:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:07:34.492111 | orchestrator | 2026-01-03 01:07:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:07:34.493324 | orchestrator | 2026-01-03 01:07:34 | INFO  | Task ac1aa15e-e589-4fc2-96a2-0b3f0355693b is in state 
STARTED 2026-01-03 01:07:34.493361 | orchestrator | 2026-01-03 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:37.529415 | orchestrator | 2026-01-03 01:07:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:07:37.530242 | orchestrator | 2026-01-03 01:07:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:07:37.531535 | orchestrator | 2026-01-03 01:07:37 | INFO  | Task ac1aa15e-e589-4fc2-96a2-0b3f0355693b is in state STARTED 2026-01-03 01:07:37.531551 | orchestrator | 2026-01-03 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:40.574199 | orchestrator | 2026-01-03 01:07:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:07:40.576390 | orchestrator | 2026-01-03 01:07:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:07:40.580313 | orchestrator | 2026-01-03 01:07:40 | INFO  | Task ac1aa15e-e589-4fc2-96a2-0b3f0355693b is in state SUCCESS 2026-01-03 01:07:40.582512 | orchestrator | 2026-01-03 01:07:40.582568 | orchestrator | 2026-01-03 01:07:40.582574 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-03 01:07:40.582579 | orchestrator | 2026-01-03 01:07:40.582583 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-03 01:07:40.582588 | orchestrator | Saturday 03 January 2026 01:05:28 +0000 (0:00:00.254) 0:00:00.254 ****** 2026-01-03 01:07:40.582592 | orchestrator | ok: [testbed-node-0] 2026-01-03 01:07:40.582597 | orchestrator | ok: [testbed-node-1] 2026-01-03 01:07:40.582601 | orchestrator | ok: [testbed-node-2] 2026-01-03 01:07:40.582605 | orchestrator | 2026-01-03 01:07:40.582609 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-03 01:07:40.582613 | orchestrator | Saturday 03 January 2026 01:05:29 +0000 
(0:00:00.332) 0:00:00.587 ****** 2026-01-03 01:07:40.582617 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-03 01:07:40.582622 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-03 01:07:40.582626 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-03 01:07:40.582630 | orchestrator | 2026-01-03 01:07:40.582633 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-03 01:07:40.582637 | orchestrator | 2026-01-03 01:07:40.582641 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-03 01:07:40.582645 | orchestrator | Saturday 03 January 2026 01:05:29 +0000 (0:00:00.451) 0:00:01.039 ****** 2026-01-03 01:07:40.582652 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 01:07:40.582659 | orchestrator | 2026-01-03 01:07:40.582664 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-03 01:07:40.582670 | orchestrator | Saturday 03 January 2026 01:05:30 +0000 (0:00:00.509) 0:00:01.549 ****** 2026-01-03 01:07:40.582681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.582689 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.582720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.582727 | orchestrator | 2026-01-03 01:07:40.582736 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-03 01:07:40.582755 | orchestrator | Saturday 03 January 2026 01:05:30 +0000 (0:00:00.652) 0:00:02.201 ****** 2026-01-03 01:07:40.582761 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-01-03 01:07:40.582768 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-01-03 01:07:40.582775 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 01:07:40.582781 | orchestrator | 2026-01-03 01:07:40.582787 | orchestrator | TASK 
[grafana : include_tasks] ************************************************* 2026-01-03 01:07:40.582793 | orchestrator | Saturday 03 January 2026 01:05:31 +0000 (0:00:00.825) 0:00:03.027 ****** 2026-01-03 01:07:40.582799 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-03 01:07:40.582806 | orchestrator | 2026-01-03 01:07:40.582812 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-03 01:07:40.582819 | orchestrator | Saturday 03 January 2026 01:05:32 +0000 (0:00:00.746) 0:00:03.774 ****** 2026-01-03 01:07:40.582839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.582847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.582852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.582861 | orchestrator | 2026-01-03 01:07:40.582865 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-01-03 01:07:40.582869 | orchestrator | Saturday 03 January 2026 01:05:33 +0000 (0:00:01.387) 0:00:05.162 ****** 2026-01-03 01:07:40.582873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-03 01:07:40.582877 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:07:40.582884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-03 01:07:40.582888 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:07:40.583022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-03 01:07:40.583030 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:07:40.583034 | orchestrator | 2026-01-03 01:07:40.583038 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-03 01:07:40.583042 | orchestrator | Saturday 03 January 2026 01:05:34 +0000 (0:00:00.376) 0:00:05.539 ****** 2026-01-03 01:07:40.583046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-03 01:07:40.583050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-03 01:07:40.583060 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:07:40.583064 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:07:40.583068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-03 01:07:40.583073 | orchestrator | skipping: [testbed-node-2] 
2026-01-03 01:07:40.583076 | orchestrator | 2026-01-03 01:07:40.583097 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-03 01:07:40.583102 | orchestrator | Saturday 03 January 2026 01:05:34 +0000 (0:00:00.784) 0:00:06.324 ****** 2026-01-03 01:07:40.583107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.583115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.583125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.583130 | orchestrator | 2026-01-03 01:07:40.583134 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-01-03 01:07:40.583139 | orchestrator | Saturday 03 January 2026 01:05:36 +0000 (0:00:01.305) 0:00:07.629 ****** 2026-01-03 01:07:40.583150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.583193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.583198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.583203 | orchestrator | 2026-01-03 01:07:40.583208 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-03 01:07:40.583212 | orchestrator | Saturday 03 January 2026 01:05:37 +0000 (0:00:01.401) 0:00:09.030 ****** 2026-01-03 01:07:40.583217 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:07:40.583221 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:07:40.583250 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:07:40.583255 | orchestrator | 2026-01-03 01:07:40.583281 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-03 01:07:40.583286 | orchestrator | Saturday 03 January 2026 01:05:38 +0000 (0:00:00.480) 0:00:09.511 ****** 2026-01-03 01:07:40.583290 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-03 01:07:40.583296 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-03 01:07:40.583303 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-03 
01:07:40.583308 | orchestrator | 2026-01-03 01:07:40.583312 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-03 01:07:40.583317 | orchestrator | Saturday 03 January 2026 01:05:39 +0000 (0:00:01.279) 0:00:10.790 ****** 2026-01-03 01:07:40.583322 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-03 01:07:40.583326 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-03 01:07:40.583331 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-03 01:07:40.583336 | orchestrator | 2026-01-03 01:07:40.583340 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-01-03 01:07:40.583345 | orchestrator | Saturday 03 January 2026 01:05:40 +0000 (0:00:01.336) 0:00:12.127 ****** 2026-01-03 01:07:40.583356 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-03 01:07:40.583361 | orchestrator | 2026-01-03 01:07:40.583366 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-01-03 01:07:40.583370 | orchestrator | Saturday 03 January 2026 01:05:41 +0000 (0:00:00.721) 0:00:12.848 ****** 2026-01-03 01:07:40.583375 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-01-03 01:07:40.583379 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-01-03 01:07:40.583384 | orchestrator | ok: [testbed-node-0] 2026-01-03 01:07:40.583389 | orchestrator | ok: [testbed-node-1] 2026-01-03 01:07:40.583394 | orchestrator | ok: [testbed-node-2] 2026-01-03 01:07:40.583398 | orchestrator | 2026-01-03 01:07:40.583405 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-01-03 
01:07:40.583411 | orchestrator | Saturday 03 January 2026 01:05:42 +0000 (0:00:00.711) 0:00:13.560 ****** 2026-01-03 01:07:40.583417 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:07:40.583423 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:07:40.583429 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:07:40.583436 | orchestrator | 2026-01-03 01:07:40.583442 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-01-03 01:07:40.583448 | orchestrator | Saturday 03 January 2026 01:05:42 +0000 (0:00:00.530) 0:00:14.090 ****** 2026-01-03 01:07:40.583455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1108264, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8384244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1108264, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8384244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 
01:07:40.583471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1108264, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8384244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1108351, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8556943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1108351, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8556943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-01-03 01:07:40.583504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1108351, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8556943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1108292, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8419302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1108292, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8419302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1108292, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8419302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1108356, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9172254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1108356, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9172254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1108356, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9172254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1108310, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8460994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1108310, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8460994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1108310, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8460994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1108340, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8542638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1108340, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8542638, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1108340, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8542638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1108262, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8370392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1108262, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8370392, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1108262, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8370392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1108276, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8387356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1108276, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8387356, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1108276, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8387356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1108296, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8419302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1108296, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8419302, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1108296, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8419302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1108326, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8505278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1108326, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 
1767399497.8505278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.583995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1108326, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8505278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1108349, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.855271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1108349, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 
'ctime': 1767399497.855271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1108349, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.855271, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1108280, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8403425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1108280, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 
'mtime': 1767398570.0, 'ctime': 1767399497.8403425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1108280, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8403425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1108337, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8527055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1108337, 'dev': 107, 'nlink': 
1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8527055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1108337, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8527055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1108314, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8487782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1108314, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8487782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1108314, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8487782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1108308, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8447053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1108308, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8447053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1108308, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8447053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1108303, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8435707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1108303, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8435707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1108303, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8435707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1108333, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8518684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1108333, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8518684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1108333, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8518684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1108299, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8430412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1108299, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8430412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1108299, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8430412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1108344, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8550124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1108344, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8550124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1108344, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.8550124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1109110, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9871626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1109110, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9871626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1109110, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9871626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1108894, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9426765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1108894, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9426765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1108894, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9426765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1108744, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9200296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1108744, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9200296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1108744, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9200296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1108927, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9446943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1108927, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9446943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1108927, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9446943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1108729, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.91779, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1108729, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.91779, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1108729, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.91779, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1109081, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9790914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1109081, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9790914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1109081, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9790914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1108931, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9737074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1108931, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9737074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1108931, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9737074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1109085, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9790914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1109085, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9790914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1109085, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9790914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1109099, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9845679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1109099, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9845679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1109099, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9845679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1109079, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9774961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1109079, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9774961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1109079, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9774961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1108920, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9440575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1108920, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9440575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1108920, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9440575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1108886, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9394317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1108886, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9394317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1108886, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9394317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1108914, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9431207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1108914, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9431207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1108914, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9431207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1108749, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9381058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1108749, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9381058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-03 01:07:40.584702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0,
'size': 187864, 'inode': 1108749, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9381058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1108924, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9440575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1108924, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9440575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1108924, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9440575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1109094, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.983684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1109094, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.983684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1109094, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.983684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1109091, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.981549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1109091, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.981549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 
01:07:40.584763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1109091, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.981549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1108731, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.918076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1108731, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.918076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1108731, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.918076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1108736, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9191377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1108736, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9191377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1108736, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9191377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1109070, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9774961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1109070, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 
1767398570.0, 'ctime': 1767399497.9774961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1109070, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.9774961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1109089, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.979811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 21898, 'inode': 1109089, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.979811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1109089, 'dev': 107, 'nlink': 1, 'atime': 1767398570.0, 'mtime': 1767398570.0, 'ctime': 1767399497.979811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-03 01:07:40.584834 | orchestrator | 2026-01-03 01:07:40.584838 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-01-03 01:07:40.584843 | orchestrator | Saturday 03 January 2026 01:06:20 +0000 (0:00:37.790) 0:00:51.881 ****** 2026-01-03 01:07:40.584847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2026-01-03 01:07:40.584851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.584858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-03 01:07:40.584862 | orchestrator | 2026-01-03 01:07:40.584866 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-03 01:07:40.584870 | orchestrator | Saturday 03 January 2026 01:06:21 +0000 (0:00:01.044) 0:00:52.926 ****** 2026-01-03 01:07:40.584874 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:07:40.584878 | orchestrator | 2026-01-03 01:07:40.584882 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-03 01:07:40.584886 | orchestrator | 
Saturday 03 January 2026 01:06:24 +0000 (0:00:02.642) 0:00:55.568 ****** 2026-01-03 01:07:40.584913 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:07:40.584921 | orchestrator | 2026-01-03 01:07:40.584927 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-03 01:07:40.584937 | orchestrator | Saturday 03 January 2026 01:06:26 +0000 (0:00:02.805) 0:00:58.374 ****** 2026-01-03 01:07:40.584944 | orchestrator | 2026-01-03 01:07:40.584950 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-03 01:07:40.584960 | orchestrator | Saturday 03 January 2026 01:06:26 +0000 (0:00:00.062) 0:00:58.436 ****** 2026-01-03 01:07:40.584964 | orchestrator | 2026-01-03 01:07:40.584968 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-03 01:07:40.584972 | orchestrator | Saturday 03 January 2026 01:06:27 +0000 (0:00:00.059) 0:00:58.496 ****** 2026-01-03 01:07:40.584976 | orchestrator | 2026-01-03 01:07:40.584980 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-01-03 01:07:40.584983 | orchestrator | Saturday 03 January 2026 01:06:27 +0000 (0:00:00.242) 0:00:58.739 ****** 2026-01-03 01:07:40.584987 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:07:40.584991 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:07:40.584995 | orchestrator | changed: [testbed-node-0] 2026-01-03 01:07:40.585000 | orchestrator | 2026-01-03 01:07:40.585006 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-01-03 01:07:40.585012 | orchestrator | Saturday 03 January 2026 01:06:29 +0000 (0:00:01.809) 0:01:00.548 ****** 2026-01-03 01:07:40.585017 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:07:40.585023 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:07:40.585029 | orchestrator | FAILED - RETRYING: 
[testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-03 01:07:40.585036 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-01-03 01:07:40.585042 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-01-03 01:07:40.585048 | orchestrator | ok: [testbed-node-0] 2026-01-03 01:07:40.585054 | orchestrator | 2026-01-03 01:07:40.585060 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-01-03 01:07:40.585066 | orchestrator | Saturday 03 January 2026 01:07:07 +0000 (0:00:38.797) 0:01:39.346 ****** 2026-01-03 01:07:40.585072 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:07:40.585078 | orchestrator | changed: [testbed-node-2] 2026-01-03 01:07:40.585084 | orchestrator | changed: [testbed-node-1] 2026-01-03 01:07:40.585090 | orchestrator | 2026-01-03 01:07:40.585095 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-01-03 01:07:40.585099 | orchestrator | Saturday 03 January 2026 01:07:32 +0000 (0:00:24.795) 0:02:04.141 ****** 2026-01-03 01:07:40.585103 | orchestrator | ok: [testbed-node-0] 2026-01-03 01:07:40.585108 | orchestrator | 2026-01-03 01:07:40.585114 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-01-03 01:07:40.585122 | orchestrator | Saturday 03 January 2026 01:07:34 +0000 (0:00:02.001) 0:02:06.143 ****** 2026-01-03 01:07:40.585131 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:07:40.585138 | orchestrator | skipping: [testbed-node-1] 2026-01-03 01:07:40.585143 | orchestrator | skipping: [testbed-node-2] 2026-01-03 01:07:40.585149 | orchestrator | 2026-01-03 01:07:40.585156 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-01-03 01:07:40.585162 | orchestrator | Saturday 
03 January 2026 01:07:35 +0000 (0:00:00.507) 0:02:06.651 ****** 2026-01-03 01:07:40.585169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-01-03 01:07:40.585176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-01-03 01:07:40.585188 | orchestrator | 2026-01-03 01:07:40.585194 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-01-03 01:07:40.585201 | orchestrator | Saturday 03 January 2026 01:07:37 +0000 (0:00:02.253) 0:02:08.904 ****** 2026-01-03 01:07:40.585207 | orchestrator | skipping: [testbed-node-0] 2026-01-03 01:07:40.585213 | orchestrator | 2026-01-03 01:07:40.585219 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-03 01:07:40.585226 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-03 01:07:40.585233 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-03 01:07:40.585244 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-03 01:07:40.585250 | orchestrator | 2026-01-03 01:07:40.585258 | orchestrator | 2026-01-03 01:07:40.585265 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-03 01:07:40.585271 | orchestrator | Saturday 03 January 2026 
01:07:37 +0000 (0:00:00.267) 0:02:09.172 ****** 2026-01-03 01:07:40.585277 | orchestrator | =============================================================================== 2026-01-03 01:07:40.585285 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.80s 2026-01-03 01:07:40.585290 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.79s 2026-01-03 01:07:40.585294 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 24.80s 2026-01-03 01:07:40.585299 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.81s 2026-01-03 01:07:40.585303 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.64s 2026-01-03 01:07:40.585312 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.25s 2026-01-03 01:07:40.585316 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.00s 2026-01-03 01:07:40.585321 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.81s 2026-01-03 01:07:40.585325 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.40s 2026-01-03 01:07:40.585330 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.39s 2026-01-03 01:07:40.585335 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.34s 2026-01-03 01:07:40.585339 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.31s 2026-01-03 01:07:40.585343 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.28s 2026-01-03 01:07:40.585346 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.05s 2026-01-03 01:07:40.585350 | orchestrator | grafana : Check if extra configuration file 
exists ---------------------- 0.83s 2026-01-03 01:07:40.585354 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.78s 2026-01-03 01:07:40.585358 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.75s 2026-01-03 01:07:40.585362 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.72s 2026-01-03 01:07:40.585366 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.71s 2026-01-03 01:07:40.585370 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.65s 2026-01-03 01:07:40.585454 | orchestrator | 2026-01-03 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:07:43.632141 | orchestrator | 2026-01-03 01:07:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:07:43.633705 | orchestrator | 2026-01-03 01:07:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:07:43.633749 | orchestrator | 2026-01-03 01:07:43 | INFO  | Wait 1 second(s) until the next check [... identical status-check cycles repeated roughly every 3 seconds from 01:07:46 to 01:08:35, both tasks remaining in state STARTED, elided ...] 2026-01-03 01:08:38.594870 | orchestrator | 2026-01-03 01:08:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:08:38.596175 | orchestrator | 2026-01-03 01:08:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:08:38.596433 | orchestrator | 2026-01-03 01:08:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:41.658934 | orchestrator | 2026-01-03 01:08:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:08:41.660463 | orchestrator | 2026-01-03 01:08:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:08:41.660512 | orchestrator | 2026-01-03 01:08:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:44.735765 | orchestrator | 2026-01-03 01:08:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:08:44.737375 | orchestrator | 2026-01-03 01:08:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:08:44.737434 | orchestrator | 2026-01-03 01:08:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:47.794050 | orchestrator | 2026-01-03 01:08:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:08:47.796590 | orchestrator | 2026-01-03 01:08:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:08:47.796660 | orchestrator | 2026-01-03 01:08:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:50.864292 | orchestrator | 2026-01-03 01:08:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:08:50.869401 | orchestrator | 2026-01-03 01:08:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:08:50.869486 | orchestrator | 2026-01-03 01:08:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:08:53.915972 | orchestrator | 2026-01-03 01:08:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:08:53.916595 | orchestrator | 2026-01-03 01:08:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:08:53.916730 | orchestrator | 2026-01-03 01:08:53 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:08:56.967308 | orchestrator | 2026-01-03 01:08:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:08:56.969019 | orchestrator | 2026-01-03 01:08:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:08:56.969090 | orchestrator | 2026-01-03 01:08:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:00.034736 | orchestrator | 2026-01-03 01:09:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:00.037656 | orchestrator | 2026-01-03 01:09:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:00.037717 | orchestrator | 2026-01-03 01:09:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:03.086703 | orchestrator | 2026-01-03 01:09:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:03.088270 | orchestrator | 2026-01-03 01:09:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:03.088329 | orchestrator | 2026-01-03 01:09:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:06.134258 | orchestrator | 2026-01-03 01:09:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:06.136001 | orchestrator | 2026-01-03 01:09:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:06.136042 | orchestrator | 2026-01-03 01:09:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:09.181197 | orchestrator | 2026-01-03 01:09:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:09.182448 | orchestrator | 2026-01-03 01:09:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:09.182574 | orchestrator | 2026-01-03 01:09:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:12.233205 | orchestrator | 2026-01-03 
01:09:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:12.235421 | orchestrator | 2026-01-03 01:09:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:12.235482 | orchestrator | 2026-01-03 01:09:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:15.282166 | orchestrator | 2026-01-03 01:09:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:15.283603 | orchestrator | 2026-01-03 01:09:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:15.283704 | orchestrator | 2026-01-03 01:09:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:18.331076 | orchestrator | 2026-01-03 01:09:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:18.332955 | orchestrator | 2026-01-03 01:09:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:18.333098 | orchestrator | 2026-01-03 01:09:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:21.377953 | orchestrator | 2026-01-03 01:09:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:21.380017 | orchestrator | 2026-01-03 01:09:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:21.380063 | orchestrator | 2026-01-03 01:09:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:24.432460 | orchestrator | 2026-01-03 01:09:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:24.435479 | orchestrator | 2026-01-03 01:09:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:24.435527 | orchestrator | 2026-01-03 01:09:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:27.492893 | orchestrator | 2026-01-03 01:09:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:09:27.494878 | orchestrator | 2026-01-03 01:09:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:27.495000 | orchestrator | 2026-01-03 01:09:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:30.547106 | orchestrator | 2026-01-03 01:09:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:30.548438 | orchestrator | 2026-01-03 01:09:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:30.548483 | orchestrator | 2026-01-03 01:09:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:33.603307 | orchestrator | 2026-01-03 01:09:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:33.607278 | orchestrator | 2026-01-03 01:09:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:33.607360 | orchestrator | 2026-01-03 01:09:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:36.657158 | orchestrator | 2026-01-03 01:09:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:36.660119 | orchestrator | 2026-01-03 01:09:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:36.660189 | orchestrator | 2026-01-03 01:09:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:39.712576 | orchestrator | 2026-01-03 01:09:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:39.713952 | orchestrator | 2026-01-03 01:09:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:39.714001 | orchestrator | 2026-01-03 01:09:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:42.762958 | orchestrator | 2026-01-03 01:09:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:42.764648 | orchestrator | 2026-01-03 01:09:42 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:42.764848 | orchestrator | 2026-01-03 01:09:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:45.813147 | orchestrator | 2026-01-03 01:09:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:45.815468 | orchestrator | 2026-01-03 01:09:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:45.816907 | orchestrator | 2026-01-03 01:09:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:48.872056 | orchestrator | 2026-01-03 01:09:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:48.873498 | orchestrator | 2026-01-03 01:09:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:48.873549 | orchestrator | 2026-01-03 01:09:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:51.920523 | orchestrator | 2026-01-03 01:09:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:51.922584 | orchestrator | 2026-01-03 01:09:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:51.922647 | orchestrator | 2026-01-03 01:09:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:54.976048 | orchestrator | 2026-01-03 01:09:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:54.977237 | orchestrator | 2026-01-03 01:09:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:09:54.977290 | orchestrator | 2026-01-03 01:09:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:09:58.025618 | orchestrator | 2026-01-03 01:09:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:09:58.028378 | orchestrator | 2026-01-03 01:09:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:09:58.028459 | orchestrator | 2026-01-03 01:09:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:01.093048 | orchestrator | 2026-01-03 01:10:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:01.095980 | orchestrator | 2026-01-03 01:10:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:01.096057 | orchestrator | 2026-01-03 01:10:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:04.147385 | orchestrator | 2026-01-03 01:10:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:04.149142 | orchestrator | 2026-01-03 01:10:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:04.149185 | orchestrator | 2026-01-03 01:10:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:07.196747 | orchestrator | 2026-01-03 01:10:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:07.198400 | orchestrator | 2026-01-03 01:10:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:07.198450 | orchestrator | 2026-01-03 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:10.249383 | orchestrator | 2026-01-03 01:10:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:10.251315 | orchestrator | 2026-01-03 01:10:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:10.251409 | orchestrator | 2026-01-03 01:10:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:13.298096 | orchestrator | 2026-01-03 01:10:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:13.299843 | orchestrator | 2026-01-03 01:10:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:13.299915 | orchestrator | 2026-01-03 01:10:13 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:10:16.355424 | orchestrator | 2026-01-03 01:10:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:16.358379 | orchestrator | 2026-01-03 01:10:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:16.359122 | orchestrator | 2026-01-03 01:10:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:19.408897 | orchestrator | 2026-01-03 01:10:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:19.410830 | orchestrator | 2026-01-03 01:10:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:19.410886 | orchestrator | 2026-01-03 01:10:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:22.458944 | orchestrator | 2026-01-03 01:10:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:22.461949 | orchestrator | 2026-01-03 01:10:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:22.462005 | orchestrator | 2026-01-03 01:10:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:25.510363 | orchestrator | 2026-01-03 01:10:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:25.512475 | orchestrator | 2026-01-03 01:10:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:25.512614 | orchestrator | 2026-01-03 01:10:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:28.559098 | orchestrator | 2026-01-03 01:10:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:28.560608 | orchestrator | 2026-01-03 01:10:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:28.560685 | orchestrator | 2026-01-03 01:10:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:31.610161 | orchestrator | 2026-01-03 
01:10:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:31.612834 | orchestrator | 2026-01-03 01:10:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:31.612986 | orchestrator | 2026-01-03 01:10:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:34.658115 | orchestrator | 2026-01-03 01:10:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:34.659404 | orchestrator | 2026-01-03 01:10:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:34.659446 | orchestrator | 2026-01-03 01:10:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:37.707041 | orchestrator | 2026-01-03 01:10:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:37.710349 | orchestrator | 2026-01-03 01:10:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:37.710468 | orchestrator | 2026-01-03 01:10:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:40.761107 | orchestrator | 2026-01-03 01:10:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:40.764210 | orchestrator | 2026-01-03 01:10:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:40.764249 | orchestrator | 2026-01-03 01:10:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:43.811048 | orchestrator | 2026-01-03 01:10:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:43.812980 | orchestrator | 2026-01-03 01:10:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:43.813066 | orchestrator | 2026-01-03 01:10:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:46.858185 | orchestrator | 2026-01-03 01:10:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:10:46.859780 | orchestrator | 2026-01-03 01:10:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:46.859985 | orchestrator | 2026-01-03 01:10:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:49.903896 | orchestrator | 2026-01-03 01:10:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:49.906362 | orchestrator | 2026-01-03 01:10:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:49.906417 | orchestrator | 2026-01-03 01:10:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:52.954334 | orchestrator | 2026-01-03 01:10:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:52.956725 | orchestrator | 2026-01-03 01:10:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:52.956861 | orchestrator | 2026-01-03 01:10:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:56.010000 | orchestrator | 2026-01-03 01:10:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:56.011146 | orchestrator | 2026-01-03 01:10:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:56.011208 | orchestrator | 2026-01-03 01:10:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:10:59.060799 | orchestrator | 2026-01-03 01:10:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:10:59.067668 | orchestrator | 2026-01-03 01:10:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:10:59.067743 | orchestrator | 2026-01-03 01:10:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:02.117278 | orchestrator | 2026-01-03 01:11:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:02.118971 | orchestrator | 2026-01-03 01:11:02 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:02.119010 | orchestrator | 2026-01-03 01:11:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:05.169739 | orchestrator | 2026-01-03 01:11:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:05.170818 | orchestrator | 2026-01-03 01:11:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:05.170859 | orchestrator | 2026-01-03 01:11:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:08.221962 | orchestrator | 2026-01-03 01:11:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:08.223806 | orchestrator | 2026-01-03 01:11:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:08.223843 | orchestrator | 2026-01-03 01:11:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:11.277837 | orchestrator | 2026-01-03 01:11:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:11.280079 | orchestrator | 2026-01-03 01:11:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:11.280130 | orchestrator | 2026-01-03 01:11:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:14.328771 | orchestrator | 2026-01-03 01:11:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:14.329694 | orchestrator | 2026-01-03 01:11:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:14.329749 | orchestrator | 2026-01-03 01:11:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:17.385313 | orchestrator | 2026-01-03 01:11:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:17.386226 | orchestrator | 2026-01-03 01:11:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:11:17.386271 | orchestrator | 2026-01-03 01:11:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:20.431527 | orchestrator | 2026-01-03 01:11:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:20.432609 | orchestrator | 2026-01-03 01:11:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:20.432645 | orchestrator | 2026-01-03 01:11:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:23.483172 | orchestrator | 2026-01-03 01:11:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:23.485276 | orchestrator | 2026-01-03 01:11:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:23.485344 | orchestrator | 2026-01-03 01:11:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:26.534194 | orchestrator | 2026-01-03 01:11:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:26.536820 | orchestrator | 2026-01-03 01:11:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:26.536893 | orchestrator | 2026-01-03 01:11:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:29.585549 | orchestrator | 2026-01-03 01:11:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:29.587407 | orchestrator | 2026-01-03 01:11:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:29.587474 | orchestrator | 2026-01-03 01:11:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:32.642453 | orchestrator | 2026-01-03 01:11:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:32.643893 | orchestrator | 2026-01-03 01:11:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:32.644026 | orchestrator | 2026-01-03 01:11:32 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:11:35.688399 | orchestrator | 2026-01-03 01:11:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:35.690188 | orchestrator | 2026-01-03 01:11:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:35.690273 | orchestrator | 2026-01-03 01:11:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:38.737565 | orchestrator | 2026-01-03 01:11:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:38.739211 | orchestrator | 2026-01-03 01:11:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:38.739343 | orchestrator | 2026-01-03 01:11:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:41.787711 | orchestrator | 2026-01-03 01:11:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:41.789562 | orchestrator | 2026-01-03 01:11:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:41.789635 | orchestrator | 2026-01-03 01:11:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:44.841794 | orchestrator | 2026-01-03 01:11:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:44.841891 | orchestrator | 2026-01-03 01:11:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:44.842089 | orchestrator | 2026-01-03 01:11:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:47.890189 | orchestrator | 2026-01-03 01:11:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:47.893632 | orchestrator | 2026-01-03 01:11:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:47.893735 | orchestrator | 2026-01-03 01:11:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:50.945853 | orchestrator | 2026-01-03 
01:11:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:50.948687 | orchestrator | 2026-01-03 01:11:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:50.948771 | orchestrator | 2026-01-03 01:11:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:54.009733 | orchestrator | 2026-01-03 01:11:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:54.014492 | orchestrator | 2026-01-03 01:11:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:54.014559 | orchestrator | 2026-01-03 01:11:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:11:57.058305 | orchestrator | 2026-01-03 01:11:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:11:57.059831 | orchestrator | 2026-01-03 01:11:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:11:57.059883 | orchestrator | 2026-01-03 01:11:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:12:00.108197 | orchestrator | 2026-01-03 01:12:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:12:00.110813 | orchestrator | 2026-01-03 01:12:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:12:00.110884 | orchestrator | 2026-01-03 01:12:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:12:03.175937 | orchestrator | 2026-01-03 01:12:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:12:03.177542 | orchestrator | 2026-01-03 01:12:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:12:03.177584 | orchestrator | 2026-01-03 01:12:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:12:06.236492 | orchestrator | 2026-01-03 01:12:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:12:06.237588 | orchestrator | 2026-01-03 01:12:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 01:12:06.237627 | orchestrator | 2026-01-03 01:12:06 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 01:12:09 through 01:17:38: tasks c8921031-5513-4b9c-ad45-42b04eed7ef5 and bba6cba5-900c-422c-89b8-94737fda4049 remain in state STARTED, each check followed by "Wait 1 second(s) until the next check" ...]
2026-01-03 01:17:38.935888 | orchestrator | 2026-01-03 01:17:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:17:38.936950 | orchestrator | 2026-01-03 01:17:38 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:17:38.936979 | orchestrator | 2026-01-03 01:17:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:41.988762 | orchestrator | 2026-01-03 01:17:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:17:41.990325 | orchestrator | 2026-01-03 01:17:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:17:41.990387 | orchestrator | 2026-01-03 01:17:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:45.037391 | orchestrator | 2026-01-03 01:17:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:17:45.039120 | orchestrator | 2026-01-03 01:17:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:17:45.039194 | orchestrator | 2026-01-03 01:17:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:48.084936 | orchestrator | 2026-01-03 01:17:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:17:48.086561 | orchestrator | 2026-01-03 01:17:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:17:48.086610 | orchestrator | 2026-01-03 01:17:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:51.138339 | orchestrator | 2026-01-03 01:17:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:17:51.139349 | orchestrator | 2026-01-03 01:17:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:17:51.139374 | orchestrator | 2026-01-03 01:17:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:54.176695 | orchestrator | 2026-01-03 01:17:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:17:54.179852 | orchestrator | 2026-01-03 01:17:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:17:54.179917 | orchestrator | 2026-01-03 01:17:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:17:57.230588 | orchestrator | 2026-01-03 01:17:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:17:57.232031 | orchestrator | 2026-01-03 01:17:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:17:57.232126 | orchestrator | 2026-01-03 01:17:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:00.280962 | orchestrator | 2026-01-03 01:18:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:00.281233 | orchestrator | 2026-01-03 01:18:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:00.281252 | orchestrator | 2026-01-03 01:18:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:03.337945 | orchestrator | 2026-01-03 01:18:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:03.339605 | orchestrator | 2026-01-03 01:18:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:03.339673 | orchestrator | 2026-01-03 01:18:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:06.390815 | orchestrator | 2026-01-03 01:18:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:06.392545 | orchestrator | 2026-01-03 01:18:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:06.392622 | orchestrator | 2026-01-03 01:18:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:09.439531 | orchestrator | 2026-01-03 01:18:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:09.441363 | orchestrator | 2026-01-03 01:18:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:09.441590 | orchestrator | 2026-01-03 01:18:09 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:18:12.493979 | orchestrator | 2026-01-03 01:18:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:12.495674 | orchestrator | 2026-01-03 01:18:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:12.495717 | orchestrator | 2026-01-03 01:18:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:15.550421 | orchestrator | 2026-01-03 01:18:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:15.552986 | orchestrator | 2026-01-03 01:18:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:15.553062 | orchestrator | 2026-01-03 01:18:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:18.619300 | orchestrator | 2026-01-03 01:18:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:18.619374 | orchestrator | 2026-01-03 01:18:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:18.619380 | orchestrator | 2026-01-03 01:18:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:21.657628 | orchestrator | 2026-01-03 01:18:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:21.659892 | orchestrator | 2026-01-03 01:18:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:21.659980 | orchestrator | 2026-01-03 01:18:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:24.707215 | orchestrator | 2026-01-03 01:18:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:24.707907 | orchestrator | 2026-01-03 01:18:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:24.707950 | orchestrator | 2026-01-03 01:18:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:27.765376 | orchestrator | 2026-01-03 
01:18:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:27.766666 | orchestrator | 2026-01-03 01:18:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:27.766719 | orchestrator | 2026-01-03 01:18:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:30.815229 | orchestrator | 2026-01-03 01:18:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:30.817250 | orchestrator | 2026-01-03 01:18:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:30.817314 | orchestrator | 2026-01-03 01:18:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:33.869942 | orchestrator | 2026-01-03 01:18:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:33.871138 | orchestrator | 2026-01-03 01:18:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:33.871194 | orchestrator | 2026-01-03 01:18:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:36.922741 | orchestrator | 2026-01-03 01:18:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:36.925218 | orchestrator | 2026-01-03 01:18:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:36.926314 | orchestrator | 2026-01-03 01:18:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:39.975269 | orchestrator | 2026-01-03 01:18:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:39.977038 | orchestrator | 2026-01-03 01:18:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:39.977186 | orchestrator | 2026-01-03 01:18:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:43.028344 | orchestrator | 2026-01-03 01:18:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:18:43.030054 | orchestrator | 2026-01-03 01:18:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:43.030120 | orchestrator | 2026-01-03 01:18:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:46.075059 | orchestrator | 2026-01-03 01:18:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:46.076655 | orchestrator | 2026-01-03 01:18:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:46.076710 | orchestrator | 2026-01-03 01:18:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:49.127664 | orchestrator | 2026-01-03 01:18:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:49.130350 | orchestrator | 2026-01-03 01:18:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:49.130909 | orchestrator | 2026-01-03 01:18:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:52.182648 | orchestrator | 2026-01-03 01:18:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:52.185435 | orchestrator | 2026-01-03 01:18:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:52.185523 | orchestrator | 2026-01-03 01:18:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:55.242830 | orchestrator | 2026-01-03 01:18:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:55.244250 | orchestrator | 2026-01-03 01:18:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:55.244359 | orchestrator | 2026-01-03 01:18:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:18:58.300636 | orchestrator | 2026-01-03 01:18:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:18:58.303279 | orchestrator | 2026-01-03 01:18:58 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:18:58.303381 | orchestrator | 2026-01-03 01:18:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:01.357944 | orchestrator | 2026-01-03 01:19:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:01.359489 | orchestrator | 2026-01-03 01:19:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:01.359530 | orchestrator | 2026-01-03 01:19:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:04.412104 | orchestrator | 2026-01-03 01:19:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:04.414127 | orchestrator | 2026-01-03 01:19:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:04.414192 | orchestrator | 2026-01-03 01:19:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:07.462165 | orchestrator | 2026-01-03 01:19:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:07.465268 | orchestrator | 2026-01-03 01:19:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:07.465397 | orchestrator | 2026-01-03 01:19:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:10.518272 | orchestrator | 2026-01-03 01:19:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:10.520953 | orchestrator | 2026-01-03 01:19:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:10.521026 | orchestrator | 2026-01-03 01:19:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:13.575159 | orchestrator | 2026-01-03 01:19:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:13.578267 | orchestrator | 2026-01-03 01:19:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:19:13.578339 | orchestrator | 2026-01-03 01:19:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:16.627215 | orchestrator | 2026-01-03 01:19:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:16.628888 | orchestrator | 2026-01-03 01:19:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:16.629013 | orchestrator | 2026-01-03 01:19:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:19.675057 | orchestrator | 2026-01-03 01:19:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:19.676993 | orchestrator | 2026-01-03 01:19:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:19.677050 | orchestrator | 2026-01-03 01:19:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:22.724481 | orchestrator | 2026-01-03 01:19:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:22.727108 | orchestrator | 2026-01-03 01:19:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:22.727167 | orchestrator | 2026-01-03 01:19:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:25.775334 | orchestrator | 2026-01-03 01:19:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:25.776226 | orchestrator | 2026-01-03 01:19:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:25.776641 | orchestrator | 2026-01-03 01:19:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:28.827340 | orchestrator | 2026-01-03 01:19:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:28.828317 | orchestrator | 2026-01-03 01:19:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:28.828355 | orchestrator | 2026-01-03 01:19:28 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:19:31.882260 | orchestrator | 2026-01-03 01:19:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:31.884182 | orchestrator | 2026-01-03 01:19:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:31.884260 | orchestrator | 2026-01-03 01:19:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:34.927678 | orchestrator | 2026-01-03 01:19:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:34.929138 | orchestrator | 2026-01-03 01:19:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:34.929218 | orchestrator | 2026-01-03 01:19:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:37.978606 | orchestrator | 2026-01-03 01:19:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:37.980425 | orchestrator | 2026-01-03 01:19:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:37.980474 | orchestrator | 2026-01-03 01:19:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:41.044204 | orchestrator | 2026-01-03 01:19:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:41.046048 | orchestrator | 2026-01-03 01:19:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:41.046140 | orchestrator | 2026-01-03 01:19:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:44.096589 | orchestrator | 2026-01-03 01:19:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:44.102206 | orchestrator | 2026-01-03 01:19:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:44.102272 | orchestrator | 2026-01-03 01:19:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:47.145409 | orchestrator | 2026-01-03 
01:19:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:47.146687 | orchestrator | 2026-01-03 01:19:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:47.146742 | orchestrator | 2026-01-03 01:19:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:50.194230 | orchestrator | 2026-01-03 01:19:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:50.194302 | orchestrator | 2026-01-03 01:19:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:50.194374 | orchestrator | 2026-01-03 01:19:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:53.241683 | orchestrator | 2026-01-03 01:19:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:53.243247 | orchestrator | 2026-01-03 01:19:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:53.243320 | orchestrator | 2026-01-03 01:19:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:56.289788 | orchestrator | 2026-01-03 01:19:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:56.290844 | orchestrator | 2026-01-03 01:19:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:56.290880 | orchestrator | 2026-01-03 01:19:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:19:59.339251 | orchestrator | 2026-01-03 01:19:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:19:59.340672 | orchestrator | 2026-01-03 01:19:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:19:59.340754 | orchestrator | 2026-01-03 01:19:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:02.392792 | orchestrator | 2026-01-03 01:20:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:20:02.394892 | orchestrator | 2026-01-03 01:20:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:02.394929 | orchestrator | 2026-01-03 01:20:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:05.436130 | orchestrator | 2026-01-03 01:20:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:05.436734 | orchestrator | 2026-01-03 01:20:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:05.436768 | orchestrator | 2026-01-03 01:20:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:08.481754 | orchestrator | 2026-01-03 01:20:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:08.483107 | orchestrator | 2026-01-03 01:20:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:08.483310 | orchestrator | 2026-01-03 01:20:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:11.534894 | orchestrator | 2026-01-03 01:20:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:11.536301 | orchestrator | 2026-01-03 01:20:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:11.536347 | orchestrator | 2026-01-03 01:20:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:14.582224 | orchestrator | 2026-01-03 01:20:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:14.584889 | orchestrator | 2026-01-03 01:20:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:14.584943 | orchestrator | 2026-01-03 01:20:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:17.642182 | orchestrator | 2026-01-03 01:20:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:17.645371 | orchestrator | 2026-01-03 01:20:17 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:17.646146 | orchestrator | 2026-01-03 01:20:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:20.698441 | orchestrator | 2026-01-03 01:20:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:20.702286 | orchestrator | 2026-01-03 01:20:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:20.702376 | orchestrator | 2026-01-03 01:20:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:23.749110 | orchestrator | 2026-01-03 01:20:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:23.750279 | orchestrator | 2026-01-03 01:20:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:23.750464 | orchestrator | 2026-01-03 01:20:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:26.797802 | orchestrator | 2026-01-03 01:20:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:26.798973 | orchestrator | 2026-01-03 01:20:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:26.799037 | orchestrator | 2026-01-03 01:20:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:29.849377 | orchestrator | 2026-01-03 01:20:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:29.851361 | orchestrator | 2026-01-03 01:20:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:29.851933 | orchestrator | 2026-01-03 01:20:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:32.901926 | orchestrator | 2026-01-03 01:20:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:32.903517 | orchestrator | 2026-01-03 01:20:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:20:32.903571 | orchestrator | 2026-01-03 01:20:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:35.955578 | orchestrator | 2026-01-03 01:20:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:35.956867 | orchestrator | 2026-01-03 01:20:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:35.956909 | orchestrator | 2026-01-03 01:20:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:39.008595 | orchestrator | 2026-01-03 01:20:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:39.008727 | orchestrator | 2026-01-03 01:20:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:39.008739 | orchestrator | 2026-01-03 01:20:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:42.061044 | orchestrator | 2026-01-03 01:20:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:42.062683 | orchestrator | 2026-01-03 01:20:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:42.062805 | orchestrator | 2026-01-03 01:20:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:45.108020 | orchestrator | 2026-01-03 01:20:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:45.110405 | orchestrator | 2026-01-03 01:20:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:45.110468 | orchestrator | 2026-01-03 01:20:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:48.156848 | orchestrator | 2026-01-03 01:20:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:48.159600 | orchestrator | 2026-01-03 01:20:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:48.159721 | orchestrator | 2026-01-03 01:20:48 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:20:51.206374 | orchestrator | 2026-01-03 01:20:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:51.209686 | orchestrator | 2026-01-03 01:20:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:51.209759 | orchestrator | 2026-01-03 01:20:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:54.255063 | orchestrator | 2026-01-03 01:20:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:54.257292 | orchestrator | 2026-01-03 01:20:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:54.257357 | orchestrator | 2026-01-03 01:20:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:20:57.299349 | orchestrator | 2026-01-03 01:20:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:20:57.302318 | orchestrator | 2026-01-03 01:20:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:20:57.302699 | orchestrator | 2026-01-03 01:20:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:21:00.344609 | orchestrator | 2026-01-03 01:21:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:21:00.346354 | orchestrator | 2026-01-03 01:21:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:21:00.346410 | orchestrator | 2026-01-03 01:21:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:21:03.391612 | orchestrator | 2026-01-03 01:21:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:21:03.393929 | orchestrator | 2026-01-03 01:21:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:21:03.394009 | orchestrator | 2026-01-03 01:21:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:21:06.437909 | orchestrator | 2026-01-03 
01:21:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:21:06.439216 | orchestrator | 2026-01-03 01:21:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:21:06.439288 | orchestrator | 2026-01-03 01:21:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:21:09.484285 | orchestrator | 2026-01-03 01:21:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:21:09.485837 | orchestrator | 2026-01-03 01:21:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:21:09.485883 | orchestrator | 2026-01-03 01:21:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:21:12.538196 | orchestrator | 2026-01-03 01:21:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:21:12.540723 | orchestrator | 2026-01-03 01:21:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:21:12.540844 | orchestrator | 2026-01-03 01:21:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:21:15.590421 | orchestrator | 2026-01-03 01:21:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:21:15.592118 | orchestrator | 2026-01-03 01:21:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:21:15.592268 | orchestrator | 2026-01-03 01:21:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:21:18.647688 | orchestrator | 2026-01-03 01:21:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:21:18.651994 | orchestrator | 2026-01-03 01:21:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:21:18.652088 | orchestrator | 2026-01-03 01:21:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:21:21.703182 | orchestrator | 2026-01-03 01:21:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:21:21.706134 | orchestrator | 2026-01-03 01:21:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 01:21:21.706220 | orchestrator | 2026-01-03 01:21:21 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:21:24.754357 | orchestrator | 2026-01-03 01:21:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
[repetitive polling output elided: tasks c8921031-5513-4b9c-ad45-42b04eed7ef5 and bba6cba5-900c-422c-89b8-94737fda4049 both remained in state STARTED, re-checked every ~3 seconds from 01:21:24 through 01:26:35]
2026-01-03 01:26:38.918760 | orchestrator | 2026-01-03 01:26:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state
STARTED 2026-01-03 01:26:38.920721 | orchestrator | 2026-01-03 01:26:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:26:38.920815 | orchestrator | 2026-01-03 01:26:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:41.971191 | orchestrator | 2026-01-03 01:26:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:26:41.973280 | orchestrator | 2026-01-03 01:26:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:26:41.973477 | orchestrator | 2026-01-03 01:26:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:45.025999 | orchestrator | 2026-01-03 01:26:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:26:45.026863 | orchestrator | 2026-01-03 01:26:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:26:45.026913 | orchestrator | 2026-01-03 01:26:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:48.075859 | orchestrator | 2026-01-03 01:26:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:26:48.077787 | orchestrator | 2026-01-03 01:26:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:26:48.077844 | orchestrator | 2026-01-03 01:26:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:51.118279 | orchestrator | 2026-01-03 01:26:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:26:51.120439 | orchestrator | 2026-01-03 01:26:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:26:51.120514 | orchestrator | 2026-01-03 01:26:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:54.169993 | orchestrator | 2026-01-03 01:26:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:26:54.171707 | orchestrator | 2026-01-03 01:26:54 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:26:54.171777 | orchestrator | 2026-01-03 01:26:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:26:57.224583 | orchestrator | 2026-01-03 01:26:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:26:57.226494 | orchestrator | 2026-01-03 01:26:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:26:57.226537 | orchestrator | 2026-01-03 01:26:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:00.279754 | orchestrator | 2026-01-03 01:27:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:00.280752 | orchestrator | 2026-01-03 01:27:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:00.280927 | orchestrator | 2026-01-03 01:27:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:03.331421 | orchestrator | 2026-01-03 01:27:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:03.333103 | orchestrator | 2026-01-03 01:27:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:03.333133 | orchestrator | 2026-01-03 01:27:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:06.384543 | orchestrator | 2026-01-03 01:27:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:06.387021 | orchestrator | 2026-01-03 01:27:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:06.387107 | orchestrator | 2026-01-03 01:27:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:09.437330 | orchestrator | 2026-01-03 01:27:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:09.438860 | orchestrator | 2026-01-03 01:27:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:27:09.438958 | orchestrator | 2026-01-03 01:27:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:12.483180 | orchestrator | 2026-01-03 01:27:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:12.485256 | orchestrator | 2026-01-03 01:27:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:12.485333 | orchestrator | 2026-01-03 01:27:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:15.528713 | orchestrator | 2026-01-03 01:27:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:15.529881 | orchestrator | 2026-01-03 01:27:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:15.529939 | orchestrator | 2026-01-03 01:27:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:18.575655 | orchestrator | 2026-01-03 01:27:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:18.578559 | orchestrator | 2026-01-03 01:27:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:18.578666 | orchestrator | 2026-01-03 01:27:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:21.624815 | orchestrator | 2026-01-03 01:27:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:21.627088 | orchestrator | 2026-01-03 01:27:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:21.627219 | orchestrator | 2026-01-03 01:27:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:24.678159 | orchestrator | 2026-01-03 01:27:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:24.680623 | orchestrator | 2026-01-03 01:27:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:24.680741 | orchestrator | 2026-01-03 01:27:24 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:27:27.733936 | orchestrator | 2026-01-03 01:27:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:27.735184 | orchestrator | 2026-01-03 01:27:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:27.735938 | orchestrator | 2026-01-03 01:27:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:30.788047 | orchestrator | 2026-01-03 01:27:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:30.789191 | orchestrator | 2026-01-03 01:27:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:30.789244 | orchestrator | 2026-01-03 01:27:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:33.843664 | orchestrator | 2026-01-03 01:27:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:33.845738 | orchestrator | 2026-01-03 01:27:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:33.845817 | orchestrator | 2026-01-03 01:27:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:36.895716 | orchestrator | 2026-01-03 01:27:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:36.899637 | orchestrator | 2026-01-03 01:27:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:36.899755 | orchestrator | 2026-01-03 01:27:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:39.951614 | orchestrator | 2026-01-03 01:27:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:39.954131 | orchestrator | 2026-01-03 01:27:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:39.954223 | orchestrator | 2026-01-03 01:27:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:43.003623 | orchestrator | 2026-01-03 
01:27:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:43.004185 | orchestrator | 2026-01-03 01:27:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:43.004227 | orchestrator | 2026-01-03 01:27:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:46.052107 | orchestrator | 2026-01-03 01:27:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:46.054250 | orchestrator | 2026-01-03 01:27:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:46.054355 | orchestrator | 2026-01-03 01:27:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:49.097178 | orchestrator | 2026-01-03 01:27:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:49.097611 | orchestrator | 2026-01-03 01:27:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:49.097638 | orchestrator | 2026-01-03 01:27:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:52.144703 | orchestrator | 2026-01-03 01:27:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:52.145801 | orchestrator | 2026-01-03 01:27:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:52.145877 | orchestrator | 2026-01-03 01:27:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:55.201826 | orchestrator | 2026-01-03 01:27:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:27:55.203428 | orchestrator | 2026-01-03 01:27:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:55.203527 | orchestrator | 2026-01-03 01:27:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:27:58.245716 | orchestrator | 2026-01-03 01:27:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:27:58.246413 | orchestrator | 2026-01-03 01:27:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:27:58.246458 | orchestrator | 2026-01-03 01:27:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:01.294043 | orchestrator | 2026-01-03 01:28:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:01.295835 | orchestrator | 2026-01-03 01:28:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:01.295911 | orchestrator | 2026-01-03 01:28:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:04.346165 | orchestrator | 2026-01-03 01:28:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:04.347650 | orchestrator | 2026-01-03 01:28:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:04.347694 | orchestrator | 2026-01-03 01:28:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:07.393798 | orchestrator | 2026-01-03 01:28:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:07.395400 | orchestrator | 2026-01-03 01:28:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:07.395441 | orchestrator | 2026-01-03 01:28:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:10.442739 | orchestrator | 2026-01-03 01:28:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:10.444239 | orchestrator | 2026-01-03 01:28:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:10.444353 | orchestrator | 2026-01-03 01:28:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:13.492763 | orchestrator | 2026-01-03 01:28:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:13.494723 | orchestrator | 2026-01-03 01:28:13 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:13.494789 | orchestrator | 2026-01-03 01:28:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:16.542417 | orchestrator | 2026-01-03 01:28:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:16.544030 | orchestrator | 2026-01-03 01:28:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:16.544083 | orchestrator | 2026-01-03 01:28:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:19.594183 | orchestrator | 2026-01-03 01:28:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:19.596190 | orchestrator | 2026-01-03 01:28:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:19.596249 | orchestrator | 2026-01-03 01:28:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:22.647421 | orchestrator | 2026-01-03 01:28:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:22.649303 | orchestrator | 2026-01-03 01:28:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:22.649368 | orchestrator | 2026-01-03 01:28:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:25.691473 | orchestrator | 2026-01-03 01:28:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:25.692758 | orchestrator | 2026-01-03 01:28:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:25.692800 | orchestrator | 2026-01-03 01:28:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:28.741618 | orchestrator | 2026-01-03 01:28:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:28.744025 | orchestrator | 2026-01-03 01:28:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:28:28.744086 | orchestrator | 2026-01-03 01:28:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:31.797976 | orchestrator | 2026-01-03 01:28:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:31.799055 | orchestrator | 2026-01-03 01:28:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:31.799096 | orchestrator | 2026-01-03 01:28:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:34.843859 | orchestrator | 2026-01-03 01:28:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:34.846355 | orchestrator | 2026-01-03 01:28:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:34.846447 | orchestrator | 2026-01-03 01:28:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:37.893777 | orchestrator | 2026-01-03 01:28:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:37.895372 | orchestrator | 2026-01-03 01:28:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:37.895423 | orchestrator | 2026-01-03 01:28:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:40.950421 | orchestrator | 2026-01-03 01:28:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:40.952571 | orchestrator | 2026-01-03 01:28:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:40.952666 | orchestrator | 2026-01-03 01:28:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:44.001066 | orchestrator | 2026-01-03 01:28:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:44.006498 | orchestrator | 2026-01-03 01:28:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:44.006576 | orchestrator | 2026-01-03 01:28:44 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:28:47.054353 | orchestrator | 2026-01-03 01:28:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:47.055357 | orchestrator | 2026-01-03 01:28:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:47.055411 | orchestrator | 2026-01-03 01:28:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:50.118562 | orchestrator | 2026-01-03 01:28:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:50.120346 | orchestrator | 2026-01-03 01:28:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:50.120433 | orchestrator | 2026-01-03 01:28:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:53.171101 | orchestrator | 2026-01-03 01:28:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:53.173132 | orchestrator | 2026-01-03 01:28:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:53.173227 | orchestrator | 2026-01-03 01:28:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:56.218935 | orchestrator | 2026-01-03 01:28:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:56.221177 | orchestrator | 2026-01-03 01:28:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:56.221222 | orchestrator | 2026-01-03 01:28:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:28:59.275846 | orchestrator | 2026-01-03 01:28:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:28:59.278050 | orchestrator | 2026-01-03 01:28:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:28:59.278119 | orchestrator | 2026-01-03 01:28:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:02.324402 | orchestrator | 2026-01-03 
01:29:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:02.326387 | orchestrator | 2026-01-03 01:29:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:02.326451 | orchestrator | 2026-01-03 01:29:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:05.381140 | orchestrator | 2026-01-03 01:29:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:05.383616 | orchestrator | 2026-01-03 01:29:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:05.383775 | orchestrator | 2026-01-03 01:29:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:08.435153 | orchestrator | 2026-01-03 01:29:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:08.437690 | orchestrator | 2026-01-03 01:29:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:08.437789 | orchestrator | 2026-01-03 01:29:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:11.486301 | orchestrator | 2026-01-03 01:29:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:11.488602 | orchestrator | 2026-01-03 01:29:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:11.488671 | orchestrator | 2026-01-03 01:29:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:14.534415 | orchestrator | 2026-01-03 01:29:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:14.537163 | orchestrator | 2026-01-03 01:29:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:14.537265 | orchestrator | 2026-01-03 01:29:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:17.580924 | orchestrator | 2026-01-03 01:29:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:29:17.582607 | orchestrator | 2026-01-03 01:29:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:17.582690 | orchestrator | 2026-01-03 01:29:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:20.628502 | orchestrator | 2026-01-03 01:29:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:20.629701 | orchestrator | 2026-01-03 01:29:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:20.629752 | orchestrator | 2026-01-03 01:29:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:23.677131 | orchestrator | 2026-01-03 01:29:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:23.679953 | orchestrator | 2026-01-03 01:29:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:23.680034 | orchestrator | 2026-01-03 01:29:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:26.728788 | orchestrator | 2026-01-03 01:29:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:26.731139 | orchestrator | 2026-01-03 01:29:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:26.731360 | orchestrator | 2026-01-03 01:29:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:29.782321 | orchestrator | 2026-01-03 01:29:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:29.784515 | orchestrator | 2026-01-03 01:29:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:29.784602 | orchestrator | 2026-01-03 01:29:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:32.832315 | orchestrator | 2026-01-03 01:29:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:32.834104 | orchestrator | 2026-01-03 01:29:32 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:32.834141 | orchestrator | 2026-01-03 01:29:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:35.880896 | orchestrator | 2026-01-03 01:29:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:35.882821 | orchestrator | 2026-01-03 01:29:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:35.882865 | orchestrator | 2026-01-03 01:29:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:38.930444 | orchestrator | 2026-01-03 01:29:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:38.932557 | orchestrator | 2026-01-03 01:29:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:38.932609 | orchestrator | 2026-01-03 01:29:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:41.974735 | orchestrator | 2026-01-03 01:29:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:41.976702 | orchestrator | 2026-01-03 01:29:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:41.976766 | orchestrator | 2026-01-03 01:29:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:45.033309 | orchestrator | 2026-01-03 01:29:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:45.034493 | orchestrator | 2026-01-03 01:29:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:45.034533 | orchestrator | 2026-01-03 01:29:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:48.080134 | orchestrator | 2026-01-03 01:29:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:48.081330 | orchestrator | 2026-01-03 01:29:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:29:48.081394 | orchestrator | 2026-01-03 01:29:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:51.118086 | orchestrator | 2026-01-03 01:29:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:51.119565 | orchestrator | 2026-01-03 01:29:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:51.119619 | orchestrator | 2026-01-03 01:29:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:54.163760 | orchestrator | 2026-01-03 01:29:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:54.165247 | orchestrator | 2026-01-03 01:29:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:54.165284 | orchestrator | 2026-01-03 01:29:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:29:57.210690 | orchestrator | 2026-01-03 01:29:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:29:57.211817 | orchestrator | 2026-01-03 01:29:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:29:57.211859 | orchestrator | 2026-01-03 01:29:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:30:00.255600 | orchestrator | 2026-01-03 01:30:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:30:00.256197 | orchestrator | 2026-01-03 01:30:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:30:00.256304 | orchestrator | 2026-01-03 01:30:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:30:03.301292 | orchestrator | 2026-01-03 01:30:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:30:03.302233 | orchestrator | 2026-01-03 01:30:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:30:03.302288 | orchestrator | 2026-01-03 01:30:03 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:30:06.354830 | orchestrator | 2026-01-03 01:30:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:30:06.357791 | orchestrator | 2026-01-03 01:30:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:30:06.358095 | orchestrator | 2026-01-03 01:30:06 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: tasks c8921031-5513-4b9c-ad45-42b04eed7ef5 and bba6cba5-900c-422c-89b8-94737fda4049 repeatedly reported in state STARTED every ~3 seconds from 01:30:09 through 01:35:20 ...]
2026-01-03 01:35:20.480691 | orchestrator | 2026-01-03 01:35:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:35:20.481807 | orchestrator | 2026-01-03 01:35:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:35:20.481899 | orchestrator | 2026-01-03 01:35:20 | INFO  | Wait 1 second(s)
until the next check 2026-01-03 01:35:23.529176 | orchestrator | 2026-01-03 01:35:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:35:23.532013 | orchestrator | 2026-01-03 01:35:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:35:23.532076 | orchestrator | 2026-01-03 01:35:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:26.572315 | orchestrator | 2026-01-03 01:35:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:35:26.573903 | orchestrator | 2026-01-03 01:35:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:35:26.573961 | orchestrator | 2026-01-03 01:35:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:29.619585 | orchestrator | 2026-01-03 01:35:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:35:29.621920 | orchestrator | 2026-01-03 01:35:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:35:29.621975 | orchestrator | 2026-01-03 01:35:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:32.670578 | orchestrator | 2026-01-03 01:35:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:35:32.672167 | orchestrator | 2026-01-03 01:35:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:35:32.672253 | orchestrator | 2026-01-03 01:35:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:35.723832 | orchestrator | 2026-01-03 01:35:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:35:35.724678 | orchestrator | 2026-01-03 01:35:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:35:35.724732 | orchestrator | 2026-01-03 01:35:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:38.769248 | orchestrator | 2026-01-03 
01:35:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:35:38.770139 | orchestrator | 2026-01-03 01:35:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:35:38.770285 | orchestrator | 2026-01-03 01:35:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:41.818237 | orchestrator | 2026-01-03 01:35:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:35:41.821456 | orchestrator | 2026-01-03 01:35:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:35:41.821527 | orchestrator | 2026-01-03 01:35:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:44.872301 | orchestrator | 2026-01-03 01:35:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:35:44.873966 | orchestrator | 2026-01-03 01:35:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:35:44.874054 | orchestrator | 2026-01-03 01:35:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:47.919132 | orchestrator | 2026-01-03 01:35:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:35:47.920143 | orchestrator | 2026-01-03 01:35:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:35:47.920211 | orchestrator | 2026-01-03 01:35:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:50.968613 | orchestrator | 2026-01-03 01:35:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:35:50.970442 | orchestrator | 2026-01-03 01:35:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:35:50.970627 | orchestrator | 2026-01-03 01:35:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:54.014994 | orchestrator | 2026-01-03 01:35:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:35:54.015917 | orchestrator | 2026-01-03 01:35:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:35:54.016646 | orchestrator | 2026-01-03 01:35:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:35:57.057435 | orchestrator | 2026-01-03 01:35:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:35:57.058983 | orchestrator | 2026-01-03 01:35:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:35:57.059037 | orchestrator | 2026-01-03 01:35:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:00.103531 | orchestrator | 2026-01-03 01:36:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:00.105728 | orchestrator | 2026-01-03 01:36:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:00.105780 | orchestrator | 2026-01-03 01:36:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:03.148993 | orchestrator | 2026-01-03 01:36:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:03.150827 | orchestrator | 2026-01-03 01:36:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:03.150969 | orchestrator | 2026-01-03 01:36:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:06.195149 | orchestrator | 2026-01-03 01:36:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:06.197242 | orchestrator | 2026-01-03 01:36:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:06.197324 | orchestrator | 2026-01-03 01:36:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:09.242051 | orchestrator | 2026-01-03 01:36:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:09.243537 | orchestrator | 2026-01-03 01:36:09 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:09.243689 | orchestrator | 2026-01-03 01:36:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:12.291812 | orchestrator | 2026-01-03 01:36:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:12.294742 | orchestrator | 2026-01-03 01:36:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:12.294805 | orchestrator | 2026-01-03 01:36:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:15.340358 | orchestrator | 2026-01-03 01:36:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:15.341834 | orchestrator | 2026-01-03 01:36:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:15.342563 | orchestrator | 2026-01-03 01:36:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:18.385842 | orchestrator | 2026-01-03 01:36:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:18.388073 | orchestrator | 2026-01-03 01:36:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:18.388109 | orchestrator | 2026-01-03 01:36:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:21.437197 | orchestrator | 2026-01-03 01:36:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:21.437748 | orchestrator | 2026-01-03 01:36:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:21.437830 | orchestrator | 2026-01-03 01:36:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:24.486205 | orchestrator | 2026-01-03 01:36:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:24.488305 | orchestrator | 2026-01-03 01:36:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:36:24.488514 | orchestrator | 2026-01-03 01:36:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:27.534871 | orchestrator | 2026-01-03 01:36:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:27.536452 | orchestrator | 2026-01-03 01:36:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:27.536596 | orchestrator | 2026-01-03 01:36:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:30.582006 | orchestrator | 2026-01-03 01:36:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:30.583589 | orchestrator | 2026-01-03 01:36:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:30.583780 | orchestrator | 2026-01-03 01:36:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:33.632811 | orchestrator | 2026-01-03 01:36:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:33.633826 | orchestrator | 2026-01-03 01:36:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:33.634075 | orchestrator | 2026-01-03 01:36:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:36.686378 | orchestrator | 2026-01-03 01:36:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:36.688839 | orchestrator | 2026-01-03 01:36:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:36.688863 | orchestrator | 2026-01-03 01:36:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:39.732820 | orchestrator | 2026-01-03 01:36:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:39.734235 | orchestrator | 2026-01-03 01:36:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:39.734305 | orchestrator | 2026-01-03 01:36:39 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:36:42.775693 | orchestrator | 2026-01-03 01:36:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:42.777046 | orchestrator | 2026-01-03 01:36:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:42.777098 | orchestrator | 2026-01-03 01:36:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:45.824240 | orchestrator | 2026-01-03 01:36:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:45.825804 | orchestrator | 2026-01-03 01:36:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:45.825922 | orchestrator | 2026-01-03 01:36:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:48.866181 | orchestrator | 2026-01-03 01:36:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:48.869250 | orchestrator | 2026-01-03 01:36:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:48.869386 | orchestrator | 2026-01-03 01:36:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:51.910856 | orchestrator | 2026-01-03 01:36:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:51.911242 | orchestrator | 2026-01-03 01:36:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:51.911256 | orchestrator | 2026-01-03 01:36:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:54.958707 | orchestrator | 2026-01-03 01:36:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:54.960436 | orchestrator | 2026-01-03 01:36:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:54.960516 | orchestrator | 2026-01-03 01:36:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:36:58.011039 | orchestrator | 2026-01-03 
01:36:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:36:58.014755 | orchestrator | 2026-01-03 01:36:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:36:58.014831 | orchestrator | 2026-01-03 01:36:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:01.063239 | orchestrator | 2026-01-03 01:37:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:01.064850 | orchestrator | 2026-01-03 01:37:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:01.064900 | orchestrator | 2026-01-03 01:37:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:04.110553 | orchestrator | 2026-01-03 01:37:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:04.111593 | orchestrator | 2026-01-03 01:37:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:04.111624 | orchestrator | 2026-01-03 01:37:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:07.162229 | orchestrator | 2026-01-03 01:37:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:07.163907 | orchestrator | 2026-01-03 01:37:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:07.163952 | orchestrator | 2026-01-03 01:37:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:10.203737 | orchestrator | 2026-01-03 01:37:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:10.204840 | orchestrator | 2026-01-03 01:37:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:10.204887 | orchestrator | 2026-01-03 01:37:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:13.256268 | orchestrator | 2026-01-03 01:37:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:37:13.258604 | orchestrator | 2026-01-03 01:37:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:13.259007 | orchestrator | 2026-01-03 01:37:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:16.303487 | orchestrator | 2026-01-03 01:37:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:16.305625 | orchestrator | 2026-01-03 01:37:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:16.305706 | orchestrator | 2026-01-03 01:37:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:19.352984 | orchestrator | 2026-01-03 01:37:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:19.354761 | orchestrator | 2026-01-03 01:37:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:19.354816 | orchestrator | 2026-01-03 01:37:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:22.406376 | orchestrator | 2026-01-03 01:37:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:22.407811 | orchestrator | 2026-01-03 01:37:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:22.407885 | orchestrator | 2026-01-03 01:37:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:25.452294 | orchestrator | 2026-01-03 01:37:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:25.453545 | orchestrator | 2026-01-03 01:37:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:25.453624 | orchestrator | 2026-01-03 01:37:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:28.502874 | orchestrator | 2026-01-03 01:37:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:28.505504 | orchestrator | 2026-01-03 01:37:28 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:28.505587 | orchestrator | 2026-01-03 01:37:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:31.550540 | orchestrator | 2026-01-03 01:37:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:31.552989 | orchestrator | 2026-01-03 01:37:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:31.553112 | orchestrator | 2026-01-03 01:37:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:34.593362 | orchestrator | 2026-01-03 01:37:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:34.597054 | orchestrator | 2026-01-03 01:37:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:34.597202 | orchestrator | 2026-01-03 01:37:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:37.646352 | orchestrator | 2026-01-03 01:37:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:37.646825 | orchestrator | 2026-01-03 01:37:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:37.646859 | orchestrator | 2026-01-03 01:37:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:40.696346 | orchestrator | 2026-01-03 01:37:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:40.699835 | orchestrator | 2026-01-03 01:37:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:40.700076 | orchestrator | 2026-01-03 01:37:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:43.747390 | orchestrator | 2026-01-03 01:37:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:43.749320 | orchestrator | 2026-01-03 01:37:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:37:43.749439 | orchestrator | 2026-01-03 01:37:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:46.793865 | orchestrator | 2026-01-03 01:37:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:46.795417 | orchestrator | 2026-01-03 01:37:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:46.795798 | orchestrator | 2026-01-03 01:37:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:49.844089 | orchestrator | 2026-01-03 01:37:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:49.845936 | orchestrator | 2026-01-03 01:37:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:49.845992 | orchestrator | 2026-01-03 01:37:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:52.890403 | orchestrator | 2026-01-03 01:37:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:52.891033 | orchestrator | 2026-01-03 01:37:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:52.891247 | orchestrator | 2026-01-03 01:37:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:55.938887 | orchestrator | 2026-01-03 01:37:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:55.939782 | orchestrator | 2026-01-03 01:37:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:55.939852 | orchestrator | 2026-01-03 01:37:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:37:58.986056 | orchestrator | 2026-01-03 01:37:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:37:58.988327 | orchestrator | 2026-01-03 01:37:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:37:58.988373 | orchestrator | 2026-01-03 01:37:58 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:38:02.035175 | orchestrator | 2026-01-03 01:38:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:02.037575 | orchestrator | 2026-01-03 01:38:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:02.037633 | orchestrator | 2026-01-03 01:38:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:05.082343 | orchestrator | 2026-01-03 01:38:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:05.084275 | orchestrator | 2026-01-03 01:38:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:05.084489 | orchestrator | 2026-01-03 01:38:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:08.133044 | orchestrator | 2026-01-03 01:38:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:08.134898 | orchestrator | 2026-01-03 01:38:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:08.134942 | orchestrator | 2026-01-03 01:38:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:11.177806 | orchestrator | 2026-01-03 01:38:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:11.179558 | orchestrator | 2026-01-03 01:38:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:11.179623 | orchestrator | 2026-01-03 01:38:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:14.223762 | orchestrator | 2026-01-03 01:38:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:14.226079 | orchestrator | 2026-01-03 01:38:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:14.226249 | orchestrator | 2026-01-03 01:38:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:17.276801 | orchestrator | 2026-01-03 
01:38:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:17.277547 | orchestrator | 2026-01-03 01:38:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:17.277577 | orchestrator | 2026-01-03 01:38:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:20.321724 | orchestrator | 2026-01-03 01:38:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:20.323222 | orchestrator | 2026-01-03 01:38:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:20.323274 | orchestrator | 2026-01-03 01:38:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:23.377224 | orchestrator | 2026-01-03 01:38:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:23.378621 | orchestrator | 2026-01-03 01:38:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:23.378749 | orchestrator | 2026-01-03 01:38:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:26.420804 | orchestrator | 2026-01-03 01:38:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:26.422689 | orchestrator | 2026-01-03 01:38:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:26.422732 | orchestrator | 2026-01-03 01:38:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:29.473764 | orchestrator | 2026-01-03 01:38:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:29.475018 | orchestrator | 2026-01-03 01:38:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:29.475188 | orchestrator | 2026-01-03 01:38:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:32.520288 | orchestrator | 2026-01-03 01:38:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:38:32.522199 | orchestrator | 2026-01-03 01:38:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:32.522259 | orchestrator | 2026-01-03 01:38:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:35.565383 | orchestrator | 2026-01-03 01:38:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:35.567319 | orchestrator | 2026-01-03 01:38:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:35.567462 | orchestrator | 2026-01-03 01:38:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:38.615878 | orchestrator | 2026-01-03 01:38:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:38.617055 | orchestrator | 2026-01-03 01:38:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:38.617074 | orchestrator | 2026-01-03 01:38:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:41.663842 | orchestrator | 2026-01-03 01:38:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:41.666406 | orchestrator | 2026-01-03 01:38:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:41.666502 | orchestrator | 2026-01-03 01:38:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:44.708265 | orchestrator | 2026-01-03 01:38:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:44.710242 | orchestrator | 2026-01-03 01:38:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:44.710309 | orchestrator | 2026-01-03 01:38:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:47.756893 | orchestrator | 2026-01-03 01:38:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:47.758863 | orchestrator | 2026-01-03 01:38:47 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:47.758903 | orchestrator | 2026-01-03 01:38:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:50.804765 | orchestrator | 2026-01-03 01:38:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:50.804873 | orchestrator | 2026-01-03 01:38:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:50.804924 | orchestrator | 2026-01-03 01:38:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:53.860861 | orchestrator | 2026-01-03 01:38:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:53.863231 | orchestrator | 2026-01-03 01:38:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:53.863296 | orchestrator | 2026-01-03 01:38:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:56.905487 | orchestrator | 2026-01-03 01:38:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:56.907662 | orchestrator | 2026-01-03 01:38:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:56.907851 | orchestrator | 2026-01-03 01:38:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:38:59.955682 | orchestrator | 2026-01-03 01:38:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:38:59.956810 | orchestrator | 2026-01-03 01:38:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:38:59.956884 | orchestrator | 2026-01-03 01:38:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:39:03.004522 | orchestrator | 2026-01-03 01:39:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:39:03.006923 | orchestrator | 2026-01-03 01:39:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:39:03.007053 | orchestrator | 2026-01-03 01:39:03 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:39:06.045495 | orchestrator | 2026-01-03 01:39:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 01:39:06.046243 | orchestrator | 2026-01-03 01:39:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 01:39:06.046313 | orchestrator | 2026-01-03 01:39:06 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycles repeated every ~3 s from 01:39:09 through 01:44:29; both tasks remained in state STARTED throughout ...]
2026-01-03 01:44:32.400353 | orchestrator | 2026-01-03 01:44:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 01:44:32.403562 | orchestrator | 2026-01-03 01:44:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 01:44:32.403623 | orchestrator | 2026-01-03 01:44:32 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:44:35.449694 | orchestrator | 2026-01-03 01:44:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 01:44:35.451970 | orchestrator | 2026-01-03 01:44:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 01:44:35.452051 | orchestrator | 2026-01-03 01:44:35 | INFO  | Wait 1 second(s)
until the next check 2026-01-03 01:44:38.498774 | orchestrator | 2026-01-03 01:44:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:44:38.500357 | orchestrator | 2026-01-03 01:44:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:44:38.500452 | orchestrator | 2026-01-03 01:44:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:41.548452 | orchestrator | 2026-01-03 01:44:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:44:41.550656 | orchestrator | 2026-01-03 01:44:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:44:41.550763 | orchestrator | 2026-01-03 01:44:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:44.593779 | orchestrator | 2026-01-03 01:44:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:44:44.596021 | orchestrator | 2026-01-03 01:44:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:44:44.596074 | orchestrator | 2026-01-03 01:44:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:47.646119 | orchestrator | 2026-01-03 01:44:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:44:47.648673 | orchestrator | 2026-01-03 01:44:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:44:47.648731 | orchestrator | 2026-01-03 01:44:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:50.696613 | orchestrator | 2026-01-03 01:44:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:44:50.698869 | orchestrator | 2026-01-03 01:44:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:44:50.699005 | orchestrator | 2026-01-03 01:44:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:53.738850 | orchestrator | 2026-01-03 
01:44:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:44:53.740710 | orchestrator | 2026-01-03 01:44:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:44:53.740858 | orchestrator | 2026-01-03 01:44:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:56.785056 | orchestrator | 2026-01-03 01:44:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:44:56.788140 | orchestrator | 2026-01-03 01:44:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:44:56.788199 | orchestrator | 2026-01-03 01:44:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:44:59.837718 | orchestrator | 2026-01-03 01:44:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:44:59.839229 | orchestrator | 2026-01-03 01:44:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:44:59.839279 | orchestrator | 2026-01-03 01:44:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:02.896076 | orchestrator | 2026-01-03 01:45:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:02.897462 | orchestrator | 2026-01-03 01:45:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:02.897506 | orchestrator | 2026-01-03 01:45:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:05.946925 | orchestrator | 2026-01-03 01:45:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:05.949204 | orchestrator | 2026-01-03 01:45:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:05.949408 | orchestrator | 2026-01-03 01:45:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:09.009254 | orchestrator | 2026-01-03 01:45:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:45:09.012570 | orchestrator | 2026-01-03 01:45:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:09.012646 | orchestrator | 2026-01-03 01:45:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:12.065167 | orchestrator | 2026-01-03 01:45:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:12.067650 | orchestrator | 2026-01-03 01:45:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:12.067717 | orchestrator | 2026-01-03 01:45:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:15.121242 | orchestrator | 2026-01-03 01:45:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:15.122604 | orchestrator | 2026-01-03 01:45:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:15.122651 | orchestrator | 2026-01-03 01:45:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:18.175454 | orchestrator | 2026-01-03 01:45:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:18.177890 | orchestrator | 2026-01-03 01:45:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:18.177970 | orchestrator | 2026-01-03 01:45:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:21.228347 | orchestrator | 2026-01-03 01:45:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:21.229881 | orchestrator | 2026-01-03 01:45:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:21.229910 | orchestrator | 2026-01-03 01:45:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:24.277806 | orchestrator | 2026-01-03 01:45:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:24.279498 | orchestrator | 2026-01-03 01:45:24 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:24.279692 | orchestrator | 2026-01-03 01:45:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:27.336144 | orchestrator | 2026-01-03 01:45:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:27.338728 | orchestrator | 2026-01-03 01:45:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:27.338803 | orchestrator | 2026-01-03 01:45:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:30.385988 | orchestrator | 2026-01-03 01:45:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:30.388276 | orchestrator | 2026-01-03 01:45:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:30.388322 | orchestrator | 2026-01-03 01:45:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:33.439014 | orchestrator | 2026-01-03 01:45:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:33.440574 | orchestrator | 2026-01-03 01:45:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:33.440706 | orchestrator | 2026-01-03 01:45:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:36.490800 | orchestrator | 2026-01-03 01:45:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:36.493507 | orchestrator | 2026-01-03 01:45:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:36.493560 | orchestrator | 2026-01-03 01:45:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:39.545285 | orchestrator | 2026-01-03 01:45:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:39.548291 | orchestrator | 2026-01-03 01:45:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:45:39.548359 | orchestrator | 2026-01-03 01:45:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:42.600084 | orchestrator | 2026-01-03 01:45:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:42.602399 | orchestrator | 2026-01-03 01:45:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:42.602467 | orchestrator | 2026-01-03 01:45:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:45.655201 | orchestrator | 2026-01-03 01:45:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:45.657670 | orchestrator | 2026-01-03 01:45:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:45.657864 | orchestrator | 2026-01-03 01:45:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:48.708863 | orchestrator | 2026-01-03 01:45:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:48.710537 | orchestrator | 2026-01-03 01:45:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:48.710589 | orchestrator | 2026-01-03 01:45:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:51.759193 | orchestrator | 2026-01-03 01:45:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:51.760268 | orchestrator | 2026-01-03 01:45:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:51.760310 | orchestrator | 2026-01-03 01:45:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:45:54.820031 | orchestrator | 2026-01-03 01:45:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:54.821676 | orchestrator | 2026-01-03 01:45:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:54.821709 | orchestrator | 2026-01-03 01:45:54 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:45:57.871716 | orchestrator | 2026-01-03 01:45:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:45:57.874553 | orchestrator | 2026-01-03 01:45:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:45:57.874774 | orchestrator | 2026-01-03 01:45:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:00.932637 | orchestrator | 2026-01-03 01:46:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:00.934896 | orchestrator | 2026-01-03 01:46:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:00.934990 | orchestrator | 2026-01-03 01:46:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:03.992534 | orchestrator | 2026-01-03 01:46:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:03.994337 | orchestrator | 2026-01-03 01:46:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:03.994388 | orchestrator | 2026-01-03 01:46:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:07.047075 | orchestrator | 2026-01-03 01:46:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:07.048460 | orchestrator | 2026-01-03 01:46:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:07.049996 | orchestrator | 2026-01-03 01:46:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:10.100749 | orchestrator | 2026-01-03 01:46:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:10.103186 | orchestrator | 2026-01-03 01:46:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:10.103267 | orchestrator | 2026-01-03 01:46:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:13.164489 | orchestrator | 2026-01-03 
01:46:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:13.168685 | orchestrator | 2026-01-03 01:46:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:13.168744 | orchestrator | 2026-01-03 01:46:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:16.217838 | orchestrator | 2026-01-03 01:46:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:16.219880 | orchestrator | 2026-01-03 01:46:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:16.220188 | orchestrator | 2026-01-03 01:46:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:19.272365 | orchestrator | 2026-01-03 01:46:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:19.274440 | orchestrator | 2026-01-03 01:46:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:19.274481 | orchestrator | 2026-01-03 01:46:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:22.324729 | orchestrator | 2026-01-03 01:46:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:22.327080 | orchestrator | 2026-01-03 01:46:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:22.327143 | orchestrator | 2026-01-03 01:46:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:25.368892 | orchestrator | 2026-01-03 01:46:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:25.370797 | orchestrator | 2026-01-03 01:46:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:25.370885 | orchestrator | 2026-01-03 01:46:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:28.425684 | orchestrator | 2026-01-03 01:46:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:46:28.428695 | orchestrator | 2026-01-03 01:46:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:28.428768 | orchestrator | 2026-01-03 01:46:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:31.471970 | orchestrator | 2026-01-03 01:46:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:31.474053 | orchestrator | 2026-01-03 01:46:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:31.474121 | orchestrator | 2026-01-03 01:46:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:34.515796 | orchestrator | 2026-01-03 01:46:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:34.517538 | orchestrator | 2026-01-03 01:46:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:34.517586 | orchestrator | 2026-01-03 01:46:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:37.566611 | orchestrator | 2026-01-03 01:46:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:37.568650 | orchestrator | 2026-01-03 01:46:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:37.568756 | orchestrator | 2026-01-03 01:46:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:40.624458 | orchestrator | 2026-01-03 01:46:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:40.625872 | orchestrator | 2026-01-03 01:46:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:40.625908 | orchestrator | 2026-01-03 01:46:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:43.674952 | orchestrator | 2026-01-03 01:46:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:43.676643 | orchestrator | 2026-01-03 01:46:43 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:43.676766 | orchestrator | 2026-01-03 01:46:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:46.728134 | orchestrator | 2026-01-03 01:46:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:46.729504 | orchestrator | 2026-01-03 01:46:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:46.729903 | orchestrator | 2026-01-03 01:46:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:49.784383 | orchestrator | 2026-01-03 01:46:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:49.787160 | orchestrator | 2026-01-03 01:46:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:49.787247 | orchestrator | 2026-01-03 01:46:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:52.839657 | orchestrator | 2026-01-03 01:46:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:52.842287 | orchestrator | 2026-01-03 01:46:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:52.842377 | orchestrator | 2026-01-03 01:46:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:55.890854 | orchestrator | 2026-01-03 01:46:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:55.893059 | orchestrator | 2026-01-03 01:46:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:46:55.893125 | orchestrator | 2026-01-03 01:46:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:46:58.946330 | orchestrator | 2026-01-03 01:46:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:46:58.947923 | orchestrator | 2026-01-03 01:46:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:46:58.947968 | orchestrator | 2026-01-03 01:46:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:01.996226 | orchestrator | 2026-01-03 01:47:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:01.997879 | orchestrator | 2026-01-03 01:47:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:01.997934 | orchestrator | 2026-01-03 01:47:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:05.048384 | orchestrator | 2026-01-03 01:47:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:05.049937 | orchestrator | 2026-01-03 01:47:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:05.049982 | orchestrator | 2026-01-03 01:47:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:08.097722 | orchestrator | 2026-01-03 01:47:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:08.099394 | orchestrator | 2026-01-03 01:47:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:08.099480 | orchestrator | 2026-01-03 01:47:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:11.150675 | orchestrator | 2026-01-03 01:47:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:11.152041 | orchestrator | 2026-01-03 01:47:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:11.152075 | orchestrator | 2026-01-03 01:47:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:14.201307 | orchestrator | 2026-01-03 01:47:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:14.203187 | orchestrator | 2026-01-03 01:47:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:14.203224 | orchestrator | 2026-01-03 01:47:14 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:47:17.249478 | orchestrator | 2026-01-03 01:47:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:17.251620 | orchestrator | 2026-01-03 01:47:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:17.251843 | orchestrator | 2026-01-03 01:47:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:20.305121 | orchestrator | 2026-01-03 01:47:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:20.305640 | orchestrator | 2026-01-03 01:47:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:20.305725 | orchestrator | 2026-01-03 01:47:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:23.355914 | orchestrator | 2026-01-03 01:47:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:23.357990 | orchestrator | 2026-01-03 01:47:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:23.358269 | orchestrator | 2026-01-03 01:47:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:26.410421 | orchestrator | 2026-01-03 01:47:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:26.412180 | orchestrator | 2026-01-03 01:47:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:26.412246 | orchestrator | 2026-01-03 01:47:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:29.464435 | orchestrator | 2026-01-03 01:47:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:29.466368 | orchestrator | 2026-01-03 01:47:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:29.466559 | orchestrator | 2026-01-03 01:47:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:32.518102 | orchestrator | 2026-01-03 
01:47:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:32.519348 | orchestrator | 2026-01-03 01:47:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:32.519409 | orchestrator | 2026-01-03 01:47:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:35.568532 | orchestrator | 2026-01-03 01:47:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:35.570510 | orchestrator | 2026-01-03 01:47:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:35.570606 | orchestrator | 2026-01-03 01:47:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:38.621406 | orchestrator | 2026-01-03 01:47:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:38.623478 | orchestrator | 2026-01-03 01:47:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:38.623616 | orchestrator | 2026-01-03 01:47:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:41.673098 | orchestrator | 2026-01-03 01:47:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:41.674258 | orchestrator | 2026-01-03 01:47:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:41.674294 | orchestrator | 2026-01-03 01:47:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:44.720654 | orchestrator | 2026-01-03 01:47:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:44.723267 | orchestrator | 2026-01-03 01:47:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:44.723337 | orchestrator | 2026-01-03 01:47:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:47.768630 | orchestrator | 2026-01-03 01:47:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:47:47.769587 | orchestrator | 2026-01-03 01:47:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:47.769632 | orchestrator | 2026-01-03 01:47:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:50.811339 | orchestrator | 2026-01-03 01:47:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:50.811868 | orchestrator | 2026-01-03 01:47:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:50.811896 | orchestrator | 2026-01-03 01:47:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:53.858212 | orchestrator | 2026-01-03 01:47:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:53.858500 | orchestrator | 2026-01-03 01:47:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:53.858800 | orchestrator | 2026-01-03 01:47:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:56.908632 | orchestrator | 2026-01-03 01:47:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:56.910353 | orchestrator | 2026-01-03 01:47:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:56.910402 | orchestrator | 2026-01-03 01:47:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:47:59.958964 | orchestrator | 2026-01-03 01:47:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:47:59.961213 | orchestrator | 2026-01-03 01:47:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:47:59.961442 | orchestrator | 2026-01-03 01:47:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:48:03.020670 | orchestrator | 2026-01-03 01:48:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:48:03.022420 | orchestrator | 2026-01-03 01:48:03 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:48:03.022500 | orchestrator | 2026-01-03 01:48:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:48:06.064051 | orchestrator | 2026-01-03 01:48:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:48:06.064963 | orchestrator | 2026-01-03 01:48:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:48:06.065121 | orchestrator | 2026-01-03 01:48:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:48:09.100758 | orchestrator | 2026-01-03 01:48:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:48:09.101217 | orchestrator | 2026-01-03 01:48:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:48:09.101490 | orchestrator | 2026-01-03 01:48:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:48:12.147768 | orchestrator | 2026-01-03 01:48:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:48:12.149033 | orchestrator | 2026-01-03 01:48:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:48:12.149061 | orchestrator | 2026-01-03 01:48:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:48:15.205344 | orchestrator | 2026-01-03 01:48:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:48:15.206425 | orchestrator | 2026-01-03 01:48:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:48:15.206478 | orchestrator | 2026-01-03 01:48:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:48:18.250578 | orchestrator | 2026-01-03 01:48:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:48:18.252644 | orchestrator | 2026-01-03 01:48:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:48:18.252770 | orchestrator | 2026-01-03 01:48:18 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:48:21.286611 | orchestrator | 2026-01-03 01:48:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 01:48:21.287228 | orchestrator | 2026-01-03 01:48:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 01:48:21.287530 | orchestrator | 2026-01-03 01:48:21 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 01:48:24 through 01:53:14; both tasks remained in state STARTED throughout ...]
2026-01-03 01:53:17.327637 | orchestrator | 2026-01-03 01:53:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 01:53:17.330230 | orchestrator | 2026-01-03 01:53:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 01:53:17.330281 | orchestrator | 2026-01-03 01:53:17 | INFO  | Wait 1 second(s) until the next check
2026-01-03 01:53:20.377370 | orchestrator | 2026-01-03 01:53:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 01:53:20.379800 | orchestrator | 2026-01-03 01:53:20 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:53:20.379850 | orchestrator | 2026-01-03 01:53:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:23.432449 | orchestrator | 2026-01-03 01:53:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:53:23.436559 | orchestrator | 2026-01-03 01:53:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:53:23.436675 | orchestrator | 2026-01-03 01:53:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:26.490261 | orchestrator | 2026-01-03 01:53:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:53:26.493046 | orchestrator | 2026-01-03 01:53:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:53:26.493777 | orchestrator | 2026-01-03 01:53:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:29.543738 | orchestrator | 2026-01-03 01:53:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:53:29.545328 | orchestrator | 2026-01-03 01:53:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:53:29.545461 | orchestrator | 2026-01-03 01:53:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:32.594526 | orchestrator | 2026-01-03 01:53:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:53:32.596695 | orchestrator | 2026-01-03 01:53:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:53:32.596752 | orchestrator | 2026-01-03 01:53:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:35.644924 | orchestrator | 2026-01-03 01:53:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:53:35.646329 | orchestrator | 2026-01-03 01:53:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:53:35.646364 | orchestrator | 2026-01-03 01:53:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:38.696110 | orchestrator | 2026-01-03 01:53:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:53:38.702327 | orchestrator | 2026-01-03 01:53:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:53:38.702411 | orchestrator | 2026-01-03 01:53:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:41.752676 | orchestrator | 2026-01-03 01:53:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:53:41.755656 | orchestrator | 2026-01-03 01:53:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:53:41.756045 | orchestrator | 2026-01-03 01:53:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:44.805131 | orchestrator | 2026-01-03 01:53:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:53:44.807780 | orchestrator | 2026-01-03 01:53:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:53:44.807852 | orchestrator | 2026-01-03 01:53:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:47.854272 | orchestrator | 2026-01-03 01:53:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:53:47.855330 | orchestrator | 2026-01-03 01:53:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:53:47.855377 | orchestrator | 2026-01-03 01:53:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:50.898080 | orchestrator | 2026-01-03 01:53:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:53:50.899304 | orchestrator | 2026-01-03 01:53:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:53:50.899491 | orchestrator | 2026-01-03 01:53:50 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:53:53.950540 | orchestrator | 2026-01-03 01:53:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:53:53.953551 | orchestrator | 2026-01-03 01:53:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:53:53.953629 | orchestrator | 2026-01-03 01:53:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:53:57.005152 | orchestrator | 2026-01-03 01:53:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:53:57.006451 | orchestrator | 2026-01-03 01:53:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:53:57.006505 | orchestrator | 2026-01-03 01:53:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:00.054193 | orchestrator | 2026-01-03 01:54:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:00.055570 | orchestrator | 2026-01-03 01:54:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:00.055644 | orchestrator | 2026-01-03 01:54:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:03.106901 | orchestrator | 2026-01-03 01:54:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:03.109272 | orchestrator | 2026-01-03 01:54:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:03.109325 | orchestrator | 2026-01-03 01:54:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:06.158580 | orchestrator | 2026-01-03 01:54:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:06.160338 | orchestrator | 2026-01-03 01:54:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:06.160363 | orchestrator | 2026-01-03 01:54:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:09.208027 | orchestrator | 2026-01-03 
01:54:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:09.210532 | orchestrator | 2026-01-03 01:54:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:09.210657 | orchestrator | 2026-01-03 01:54:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:12.267347 | orchestrator | 2026-01-03 01:54:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:12.268503 | orchestrator | 2026-01-03 01:54:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:12.268602 | orchestrator | 2026-01-03 01:54:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:15.318493 | orchestrator | 2026-01-03 01:54:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:15.319673 | orchestrator | 2026-01-03 01:54:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:15.319732 | orchestrator | 2026-01-03 01:54:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:18.369362 | orchestrator | 2026-01-03 01:54:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:18.370878 | orchestrator | 2026-01-03 01:54:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:18.371170 | orchestrator | 2026-01-03 01:54:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:21.417000 | orchestrator | 2026-01-03 01:54:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:21.418471 | orchestrator | 2026-01-03 01:54:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:21.418557 | orchestrator | 2026-01-03 01:54:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:24.466853 | orchestrator | 2026-01-03 01:54:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:54:24.469119 | orchestrator | 2026-01-03 01:54:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:24.469157 | orchestrator | 2026-01-03 01:54:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:27.524957 | orchestrator | 2026-01-03 01:54:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:27.526574 | orchestrator | 2026-01-03 01:54:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:27.526621 | orchestrator | 2026-01-03 01:54:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:30.579291 | orchestrator | 2026-01-03 01:54:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:30.581527 | orchestrator | 2026-01-03 01:54:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:30.581635 | orchestrator | 2026-01-03 01:54:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:33.635173 | orchestrator | 2026-01-03 01:54:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:33.637560 | orchestrator | 2026-01-03 01:54:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:33.637785 | orchestrator | 2026-01-03 01:54:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:36.688341 | orchestrator | 2026-01-03 01:54:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:36.689700 | orchestrator | 2026-01-03 01:54:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:36.689743 | orchestrator | 2026-01-03 01:54:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:39.735286 | orchestrator | 2026-01-03 01:54:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:39.737457 | orchestrator | 2026-01-03 01:54:39 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:39.737559 | orchestrator | 2026-01-03 01:54:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:42.789089 | orchestrator | 2026-01-03 01:54:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:42.790445 | orchestrator | 2026-01-03 01:54:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:42.790484 | orchestrator | 2026-01-03 01:54:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:45.841414 | orchestrator | 2026-01-03 01:54:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:45.843777 | orchestrator | 2026-01-03 01:54:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:45.843899 | orchestrator | 2026-01-03 01:54:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:48.894873 | orchestrator | 2026-01-03 01:54:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:48.899273 | orchestrator | 2026-01-03 01:54:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:48.899355 | orchestrator | 2026-01-03 01:54:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:51.955875 | orchestrator | 2026-01-03 01:54:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:51.957278 | orchestrator | 2026-01-03 01:54:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:51.957370 | orchestrator | 2026-01-03 01:54:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:55.027556 | orchestrator | 2026-01-03 01:54:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:55.028136 | orchestrator | 2026-01-03 01:54:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:54:55.028202 | orchestrator | 2026-01-03 01:54:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:54:58.075740 | orchestrator | 2026-01-03 01:54:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:54:58.077503 | orchestrator | 2026-01-03 01:54:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:54:58.077567 | orchestrator | 2026-01-03 01:54:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:01.126546 | orchestrator | 2026-01-03 01:55:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:01.127497 | orchestrator | 2026-01-03 01:55:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:01.127538 | orchestrator | 2026-01-03 01:55:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:04.175266 | orchestrator | 2026-01-03 01:55:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:04.178642 | orchestrator | 2026-01-03 01:55:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:04.178762 | orchestrator | 2026-01-03 01:55:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:07.228673 | orchestrator | 2026-01-03 01:55:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:07.230158 | orchestrator | 2026-01-03 01:55:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:07.230260 | orchestrator | 2026-01-03 01:55:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:10.279633 | orchestrator | 2026-01-03 01:55:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:10.281215 | orchestrator | 2026-01-03 01:55:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:10.281270 | orchestrator | 2026-01-03 01:55:10 | INFO  | Wait 1 second(s) 
until the next check 2026-01-03 01:55:13.337174 | orchestrator | 2026-01-03 01:55:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:13.338498 | orchestrator | 2026-01-03 01:55:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:13.338650 | orchestrator | 2026-01-03 01:55:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:16.394847 | orchestrator | 2026-01-03 01:55:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:16.396586 | orchestrator | 2026-01-03 01:55:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:16.396636 | orchestrator | 2026-01-03 01:55:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:19.446946 | orchestrator | 2026-01-03 01:55:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:19.449130 | orchestrator | 2026-01-03 01:55:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:19.449270 | orchestrator | 2026-01-03 01:55:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:22.498501 | orchestrator | 2026-01-03 01:55:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:22.500420 | orchestrator | 2026-01-03 01:55:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:22.500478 | orchestrator | 2026-01-03 01:55:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:25.550369 | orchestrator | 2026-01-03 01:55:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:25.552125 | orchestrator | 2026-01-03 01:55:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:25.552173 | orchestrator | 2026-01-03 01:55:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:28.602141 | orchestrator | 2026-01-03 
01:55:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:28.602631 | orchestrator | 2026-01-03 01:55:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:28.602678 | orchestrator | 2026-01-03 01:55:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:31.653045 | orchestrator | 2026-01-03 01:55:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:31.655062 | orchestrator | 2026-01-03 01:55:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:31.655144 | orchestrator | 2026-01-03 01:55:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:34.704094 | orchestrator | 2026-01-03 01:55:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:34.705737 | orchestrator | 2026-01-03 01:55:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:34.705834 | orchestrator | 2026-01-03 01:55:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:37.747553 | orchestrator | 2026-01-03 01:55:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:37.747714 | orchestrator | 2026-01-03 01:55:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:37.747724 | orchestrator | 2026-01-03 01:55:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:40.792036 | orchestrator | 2026-01-03 01:55:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:40.793840 | orchestrator | 2026-01-03 01:55:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:40.793886 | orchestrator | 2026-01-03 01:55:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:43.848886 | orchestrator | 2026-01-03 01:55:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state 
STARTED 2026-01-03 01:55:43.850521 | orchestrator | 2026-01-03 01:55:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:43.850575 | orchestrator | 2026-01-03 01:55:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:46.897892 | orchestrator | 2026-01-03 01:55:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:46.900453 | orchestrator | 2026-01-03 01:55:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:46.900515 | orchestrator | 2026-01-03 01:55:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:49.952995 | orchestrator | 2026-01-03 01:55:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:49.954000 | orchestrator | 2026-01-03 01:55:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:49.954583 | orchestrator | 2026-01-03 01:55:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:52.997387 | orchestrator | 2026-01-03 01:55:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:52.999727 | orchestrator | 2026-01-03 01:55:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:52.999829 | orchestrator | 2026-01-03 01:55:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:56.039565 | orchestrator | 2026-01-03 01:55:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:56.041536 | orchestrator | 2026-01-03 01:55:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:56.041588 | orchestrator | 2026-01-03 01:55:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:55:59.084113 | orchestrator | 2026-01-03 01:55:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:55:59.085235 | orchestrator | 2026-01-03 01:55:59 | INFO  
| Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:55:59.085272 | orchestrator | 2026-01-03 01:55:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:02.136212 | orchestrator | 2026-01-03 01:56:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:02.137543 | orchestrator | 2026-01-03 01:56:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:02.137576 | orchestrator | 2026-01-03 01:56:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:05.188874 | orchestrator | 2026-01-03 01:56:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:05.191129 | orchestrator | 2026-01-03 01:56:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:05.191205 | orchestrator | 2026-01-03 01:56:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:08.237241 | orchestrator | 2026-01-03 01:56:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:08.239399 | orchestrator | 2026-01-03 01:56:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:08.239456 | orchestrator | 2026-01-03 01:56:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:11.279897 | orchestrator | 2026-01-03 01:56:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:11.281292 | orchestrator | 2026-01-03 01:56:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:11.281366 | orchestrator | 2026-01-03 01:56:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:14.328510 | orchestrator | 2026-01-03 01:56:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:14.329365 | orchestrator | 2026-01-03 01:56:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 
01:56:14.329413 | orchestrator | 2026-01-03 01:56:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:17.380794 | orchestrator | 2026-01-03 01:56:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:17.383266 | orchestrator | 2026-01-03 01:56:17[0m | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:17.383447 | orchestrator | 2026-01-03 01:56:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:20.431525 | orchestrator | 2026-01-03 01:56:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:20.432154 | orchestrator | 2026-01-03 01:56:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:20.432253 | orchestrator | 2026-01-03 01:56:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:23.481700 | orchestrator | 2026-01-03 01:56:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:23.483799 | orchestrator | 2026-01-03 01:56:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:23.483853 | orchestrator | 2026-01-03 01:56:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:26.547466 | orchestrator | 2026-01-03 01:56:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:26.549357 | orchestrator | 2026-01-03 01:56:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:26.549538 | orchestrator | 2026-01-03 01:56:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:29.601446 | orchestrator | 2026-01-03 01:56:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:29.603051 | orchestrator | 2026-01-03 01:56:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:29.603115 | orchestrator | 2026-01-03 01:56:29 | INFO  | Wait 1 
second(s) until the next check 2026-01-03 01:56:32.646174 | orchestrator | 2026-01-03 01:56:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:32.647616 | orchestrator | 2026-01-03 01:56:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:32.647669 | orchestrator | 2026-01-03 01:56:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:35.696513 | orchestrator | 2026-01-03 01:56:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:35.698134 | orchestrator | 2026-01-03 01:56:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:35.698202 | orchestrator | 2026-01-03 01:56:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:38.747490 | orchestrator | 2026-01-03 01:56:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:38.749601 | orchestrator | 2026-01-03 01:56:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:38.750067 | orchestrator | 2026-01-03 01:56:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:41.805155 | orchestrator | 2026-01-03 01:56:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:41.808455 | orchestrator | 2026-01-03 01:56:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:41.808514 | orchestrator | 2026-01-03 01:56:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:44.860605 | orchestrator | 2026-01-03 01:56:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:44.862167 | orchestrator | 2026-01-03 01:56:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:44.862215 | orchestrator | 2026-01-03 01:56:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:47.910641 | orchestrator | 
2026-01-03 01:56:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:47.912192 | orchestrator | 2026-01-03 01:56:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:47.912248 | orchestrator | 2026-01-03 01:56:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:50.949270 | orchestrator | 2026-01-03 01:56:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:50.950141 | orchestrator | 2026-01-03 01:56:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:50.950177 | orchestrator | 2026-01-03 01:56:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:53.995783 | orchestrator | 2026-01-03 01:56:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:53.997584 | orchestrator | 2026-01-03 01:56:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:53.997635 | orchestrator | 2026-01-03 01:56:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:56:57.043643 | orchestrator | 2026-01-03 01:56:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:56:57.045515 | orchestrator | 2026-01-03 01:56:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:56:57.045647 | orchestrator | 2026-01-03 01:56:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:57:00.095611 | orchestrator | 2026-01-03 01:57:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 01:57:00.098286 | orchestrator | 2026-01-03 01:57:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:57:00.098356 | orchestrator | 2026-01-03 01:57:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 01:57:03.140308 | orchestrator | 2026-01-03 01:57:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 01:57:03.142713 | orchestrator | 2026-01-03 01:57:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 01:57:03.142776 | orchestrator | 2026-01-03 01:57:03 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries elided: tasks c8921031-5513-4b9c-ad45-42b04eed7ef5 and bba6cba5-900c-422c-89b8-94737fda4049 both remain in state STARTED, re-checked every ~3 seconds from 01:57:06 through 02:02:32 ...]
2026-01-03 02:02:35.768961 | orchestrator | 2026-01-03 02:02:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:02:35.770987 | orchestrator | 2026-01-03 02:02:35 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:02:35.771132 | orchestrator | 2026-01-03 02:02:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:38.818604 | orchestrator | 2026-01-03 02:02:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:02:38.820110 | orchestrator | 2026-01-03 02:02:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:02:38.820170 | orchestrator | 2026-01-03 02:02:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:41.869625 | orchestrator | 2026-01-03 02:02:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:02:41.871643 | orchestrator | 2026-01-03 02:02:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:02:41.871708 | orchestrator | 2026-01-03 02:02:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:44.915016 | orchestrator | 2026-01-03 02:02:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:02:44.916357 | orchestrator | 2026-01-03 02:02:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:02:44.916487 | orchestrator | 2026-01-03 02:02:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:47.958734 | orchestrator | 2026-01-03 02:02:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:02:47.961373 | orchestrator | 2026-01-03 02:02:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:02:47.961429 | orchestrator | 2026-01-03 02:02:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:51.009404 | orchestrator | 2026-01-03 02:02:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:02:51.010437 | orchestrator | 2026-01-03 02:02:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:02:51.010485 | orchestrator | 2026-01-03 02:02:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:54.056441 | orchestrator | 2026-01-03 02:02:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:02:54.059137 | orchestrator | 2026-01-03 02:02:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:02:54.059191 | orchestrator | 2026-01-03 02:02:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:02:57.103563 | orchestrator | 2026-01-03 02:02:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:02:57.104303 | orchestrator | 2026-01-03 02:02:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:02:57.104335 | orchestrator | 2026-01-03 02:02:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:00.155697 | orchestrator | 2026-01-03 02:03:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:00.157757 | orchestrator | 2026-01-03 02:03:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:00.157924 | orchestrator | 2026-01-03 02:03:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:03.207320 | orchestrator | 2026-01-03 02:03:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:03.209525 | orchestrator | 2026-01-03 02:03:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:03.209915 | orchestrator | 2026-01-03 02:03:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:06.261235 | orchestrator | 2026-01-03 02:03:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:06.263824 | orchestrator | 2026-01-03 02:03:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:06.263946 | orchestrator | 2026-01-03 02:03:06 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:03:09.313883 | orchestrator | 2026-01-03 02:03:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:09.315389 | orchestrator | 2026-01-03 02:03:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:09.315495 | orchestrator | 2026-01-03 02:03:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:12.366472 | orchestrator | 2026-01-03 02:03:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:12.367462 | orchestrator | 2026-01-03 02:03:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:12.367577 | orchestrator | 2026-01-03 02:03:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:15.417324 | orchestrator | 2026-01-03 02:03:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:15.419898 | orchestrator | 2026-01-03 02:03:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:15.419971 | orchestrator | 2026-01-03 02:03:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:18.470721 | orchestrator | 2026-01-03 02:03:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:18.472908 | orchestrator | 2026-01-03 02:03:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:18.472992 | orchestrator | 2026-01-03 02:03:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:21.514932 | orchestrator | 2026-01-03 02:03:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:21.516409 | orchestrator | 2026-01-03 02:03:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:21.516488 | orchestrator | 2026-01-03 02:03:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:24.561109 | orchestrator | 
2026-01-03 02:03:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:24.562581 | orchestrator | 2026-01-03 02:03:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:24.562630 | orchestrator | 2026-01-03 02:03:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:27.615276 | orchestrator | 2026-01-03 02:03:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:27.617056 | orchestrator | 2026-01-03 02:03:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:27.617102 | orchestrator | 2026-01-03 02:03:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:30.667006 | orchestrator | 2026-01-03 02:03:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:30.668218 | orchestrator | 2026-01-03 02:03:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:30.668263 | orchestrator | 2026-01-03 02:03:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:33.720787 | orchestrator | 2026-01-03 02:03:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:33.722762 | orchestrator | 2026-01-03 02:03:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:33.722912 | orchestrator | 2026-01-03 02:03:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:36.773964 | orchestrator | 2026-01-03 02:03:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:36.777548 | orchestrator | 2026-01-03 02:03:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:36.778404 | orchestrator | 2026-01-03 02:03:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:39.830768 | orchestrator | 2026-01-03 02:03:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:03:39.832446 | orchestrator | 2026-01-03 02:03:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:39.832464 | orchestrator | 2026-01-03 02:03:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:42.889261 | orchestrator | 2026-01-03 02:03:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:42.890607 | orchestrator | 2026-01-03 02:03:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:42.890646 | orchestrator | 2026-01-03 02:03:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:45.946686 | orchestrator | 2026-01-03 02:03:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:45.949734 | orchestrator | 2026-01-03 02:03:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:45.949931 | orchestrator | 2026-01-03 02:03:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:49.000427 | orchestrator | 2026-01-03 02:03:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:49.012402 | orchestrator | 2026-01-03 02:03:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:49.012498 | orchestrator | 2026-01-03 02:03:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:52.057316 | orchestrator | 2026-01-03 02:03:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:52.059484 | orchestrator | 2026-01-03 02:03:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:52.059537 | orchestrator | 2026-01-03 02:03:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:55.101643 | orchestrator | 2026-01-03 02:03:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:55.101777 | orchestrator | 2026-01-03 02:03:55 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:55.101790 | orchestrator | 2026-01-03 02:03:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:03:58.154176 | orchestrator | 2026-01-03 02:03:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:03:58.156246 | orchestrator | 2026-01-03 02:03:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:03:58.156323 | orchestrator | 2026-01-03 02:03:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:01.210287 | orchestrator | 2026-01-03 02:04:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:01.212509 | orchestrator | 2026-01-03 02:04:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:01.212695 | orchestrator | 2026-01-03 02:04:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:04.261770 | orchestrator | 2026-01-03 02:04:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:04.263780 | orchestrator | 2026-01-03 02:04:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:04.263844 | orchestrator | 2026-01-03 02:04:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:07.312736 | orchestrator | 2026-01-03 02:04:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:07.314583 | orchestrator | 2026-01-03 02:04:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:07.314686 | orchestrator | 2026-01-03 02:04:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:10.362230 | orchestrator | 2026-01-03 02:04:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:10.364271 | orchestrator | 2026-01-03 02:04:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:04:10.364348 | orchestrator | 2026-01-03 02:04:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:13.420443 | orchestrator | 2026-01-03 02:04:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:13.421918 | orchestrator | 2026-01-03 02:04:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:13.422063 | orchestrator | 2026-01-03 02:04:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:16.478488 | orchestrator | 2026-01-03 02:04:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:16.480365 | orchestrator | 2026-01-03 02:04:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:16.480422 | orchestrator | 2026-01-03 02:04:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:19.532517 | orchestrator | 2026-01-03 02:04:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:19.534213 | orchestrator | 2026-01-03 02:04:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:19.534262 | orchestrator | 2026-01-03 02:04:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:22.581590 | orchestrator | 2026-01-03 02:04:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:22.584305 | orchestrator | 2026-01-03 02:04:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:22.584384 | orchestrator | 2026-01-03 02:04:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:25.632258 | orchestrator | 2026-01-03 02:04:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:25.634703 | orchestrator | 2026-01-03 02:04:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:25.634945 | orchestrator | 2026-01-03 02:04:25 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:04:28.683005 | orchestrator | 2026-01-03 02:04:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:28.684505 | orchestrator | 2026-01-03 02:04:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:28.684626 | orchestrator | 2026-01-03 02:04:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:31.739996 | orchestrator | 2026-01-03 02:04:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:31.740507 | orchestrator | 2026-01-03 02:04:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:31.740609 | orchestrator | 2026-01-03 02:04:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:34.786073 | orchestrator | 2026-01-03 02:04:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:34.788202 | orchestrator | 2026-01-03 02:04:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:34.788303 | orchestrator | 2026-01-03 02:04:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:37.837811 | orchestrator | 2026-01-03 02:04:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:37.838525 | orchestrator | 2026-01-03 02:04:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:37.838778 | orchestrator | 2026-01-03 02:04:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:40.884190 | orchestrator | 2026-01-03 02:04:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:40.885532 | orchestrator | 2026-01-03 02:04:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:40.885629 | orchestrator | 2026-01-03 02:04:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:43.929978 | orchestrator | 
2026-01-03 02:04:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:43.931317 | orchestrator | 2026-01-03 02:04:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:43.931363 | orchestrator | 2026-01-03 02:04:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:46.988002 | orchestrator | 2026-01-03 02:04:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:46.990451 | orchestrator | 2026-01-03 02:04:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:46.990499 | orchestrator | 2026-01-03 02:04:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:50.038800 | orchestrator | 2026-01-03 02:04:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:50.040367 | orchestrator | 2026-01-03 02:04:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:50.040572 | orchestrator | 2026-01-03 02:04:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:53.086316 | orchestrator | 2026-01-03 02:04:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:53.087436 | orchestrator | 2026-01-03 02:04:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:53.087509 | orchestrator | 2026-01-03 02:04:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:56.138946 | orchestrator | 2026-01-03 02:04:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:04:56.140328 | orchestrator | 2026-01-03 02:04:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:56.140384 | orchestrator | 2026-01-03 02:04:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:04:59.194409 | orchestrator | 2026-01-03 02:04:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:04:59.196433 | orchestrator | 2026-01-03 02:04:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:04:59.196481 | orchestrator | 2026-01-03 02:04:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:02.246305 | orchestrator | 2026-01-03 02:05:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:02.248370 | orchestrator | 2026-01-03 02:05:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:02.248475 | orchestrator | 2026-01-03 02:05:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:05.296436 | orchestrator | 2026-01-03 02:05:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:05.297220 | orchestrator | 2026-01-03 02:05:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:05.297276 | orchestrator | 2026-01-03 02:05:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:08.349951 | orchestrator | 2026-01-03 02:05:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:08.352607 | orchestrator | 2026-01-03 02:05:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:08.352677 | orchestrator | 2026-01-03 02:05:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:11.395942 | orchestrator | 2026-01-03 02:05:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:11.397407 | orchestrator | 2026-01-03 02:05:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:11.397432 | orchestrator | 2026-01-03 02:05:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:14.444171 | orchestrator | 2026-01-03 02:05:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:14.446190 | orchestrator | 2026-01-03 02:05:14 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:14.446388 | orchestrator | 2026-01-03 02:05:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:17.501223 | orchestrator | 2026-01-03 02:05:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:17.502009 | orchestrator | 2026-01-03 02:05:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:17.502108 | orchestrator | 2026-01-03 02:05:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:20.551402 | orchestrator | 2026-01-03 02:05:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:20.552254 | orchestrator | 2026-01-03 02:05:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:20.553457 | orchestrator | 2026-01-03 02:05:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:23.601514 | orchestrator | 2026-01-03 02:05:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:23.603286 | orchestrator | 2026-01-03 02:05:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:23.603332 | orchestrator | 2026-01-03 02:05:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:26.663492 | orchestrator | 2026-01-03 02:05:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:26.665100 | orchestrator | 2026-01-03 02:05:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:26.665153 | orchestrator | 2026-01-03 02:05:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:29.716806 | orchestrator | 2026-01-03 02:05:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:29.718833 | orchestrator | 2026-01-03 02:05:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:05:29.718899 | orchestrator | 2026-01-03 02:05:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:32.768321 | orchestrator | 2026-01-03 02:05:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:32.768715 | orchestrator | 2026-01-03 02:05:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:32.768741 | orchestrator | 2026-01-03 02:05:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:35.824815 | orchestrator | 2026-01-03 02:05:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:35.827289 | orchestrator | 2026-01-03 02:05:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:35.827364 | orchestrator | 2026-01-03 02:05:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:38.877337 | orchestrator | 2026-01-03 02:05:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:38.879134 | orchestrator | 2026-01-03 02:05:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:38.879212 | orchestrator | 2026-01-03 02:05:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:41.929893 | orchestrator | 2026-01-03 02:05:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:41.932516 | orchestrator | 2026-01-03 02:05:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:41.932621 | orchestrator | 2026-01-03 02:05:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:44.977652 | orchestrator | 2026-01-03 02:05:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:44.979339 | orchestrator | 2026-01-03 02:05:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:44.979394 | orchestrator | 2026-01-03 02:05:44 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:05:48.032125 | orchestrator | 2026-01-03 02:05:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:48.034388 | orchestrator | 2026-01-03 02:05:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:48.034519 | orchestrator | 2026-01-03 02:05:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:51.083636 | orchestrator | 2026-01-03 02:05:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:51.085387 | orchestrator | 2026-01-03 02:05:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:51.085619 | orchestrator | 2026-01-03 02:05:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:54.127995 | orchestrator | 2026-01-03 02:05:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:54.130093 | orchestrator | 2026-01-03 02:05:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:54.130142 | orchestrator | 2026-01-03 02:05:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:05:57.170455 | orchestrator | 2026-01-03 02:05:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:05:57.173460 | orchestrator | 2026-01-03 02:05:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:05:57.173530 | orchestrator | 2026-01-03 02:05:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:06:00.214633 | orchestrator | 2026-01-03 02:06:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:06:00.217646 | orchestrator | 2026-01-03 02:06:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:06:00.217714 | orchestrator | 2026-01-03 02:06:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:06:03.266210 | orchestrator | 
2026-01-03 02:06:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:06:03.267673 | orchestrator | 2026-01-03 02:06:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:06:03.267734 | orchestrator | 2026-01-03 02:06:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:06:06.321820 | orchestrator | 2026-01-03 02:06:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:06:06.323299 | orchestrator | 2026-01-03 02:06:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:06:06.323348 | orchestrator | 2026-01-03 02:06:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:06:09.366432 | orchestrator | 2026-01-03 02:06:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:06:09.370219 | orchestrator | 2026-01-03 02:06:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:06:09.370291 | orchestrator | 2026-01-03 02:06:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:06:12.415491 | orchestrator | 2026-01-03 02:06:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:06:12.416939 | orchestrator | 2026-01-03 02:06:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:06:12.417067 | orchestrator | 2026-01-03 02:06:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:06:15.456349 | orchestrator | 2026-01-03 02:06:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:06:15.457507 | orchestrator | 2026-01-03 02:06:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:06:15.457586 | orchestrator | 2026-01-03 02:06:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:06:18.509225 | orchestrator | 2026-01-03 02:06:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:06:18.510692 | orchestrator | 2026-01-03 02:06:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 02:06:18.510781 | orchestrator | 2026-01-03 02:06:18 | INFO  | Wait 1 second(s) until the next check
2026-01-03 02:11:20.559669 | orchestrator | 
2026-01-03 02:11:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:11:20.561633 | orchestrator | 2026-01-03 02:11:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:11:20.561716 | orchestrator | 2026-01-03 02:11:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:23.608140 | orchestrator | 2026-01-03 02:11:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:11:23.609194 | orchestrator | 2026-01-03 02:11:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:11:23.609297 | orchestrator | 2026-01-03 02:11:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:26.656588 | orchestrator | 2026-01-03 02:11:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:11:26.659807 | orchestrator | 2026-01-03 02:11:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:11:26.659870 | orchestrator | 2026-01-03 02:11:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:29.707700 | orchestrator | 2026-01-03 02:11:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:11:29.709492 | orchestrator | 2026-01-03 02:11:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:11:29.709552 | orchestrator | 2026-01-03 02:11:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:32.766531 | orchestrator | 2026-01-03 02:11:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:11:32.768577 | orchestrator | 2026-01-03 02:11:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:11:32.768629 | orchestrator | 2026-01-03 02:11:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:35.819112 | orchestrator | 2026-01-03 02:11:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:11:35.821149 | orchestrator | 2026-01-03 02:11:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:11:35.821233 | orchestrator | 2026-01-03 02:11:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:38.868970 | orchestrator | 2026-01-03 02:11:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:11:38.870830 | orchestrator | 2026-01-03 02:11:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:11:38.870890 | orchestrator | 2026-01-03 02:11:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:41.920905 | orchestrator | 2026-01-03 02:11:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:11:41.924568 | orchestrator | 2026-01-03 02:11:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:11:41.924672 | orchestrator | 2026-01-03 02:11:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:44.970188 | orchestrator | 2026-01-03 02:11:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:11:44.971889 | orchestrator | 2026-01-03 02:11:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:11:44.972185 | orchestrator | 2026-01-03 02:11:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:48.023131 | orchestrator | 2026-01-03 02:11:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:11:48.024550 | orchestrator | 2026-01-03 02:11:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:11:48.024654 | orchestrator | 2026-01-03 02:11:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:51.065275 | orchestrator | 2026-01-03 02:11:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:11:51.067060 | orchestrator | 2026-01-03 02:11:51 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:11:51.067271 | orchestrator | 2026-01-03 02:11:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:54.112805 | orchestrator | 2026-01-03 02:11:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:11:54.115658 | orchestrator | 2026-01-03 02:11:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:11:54.115739 | orchestrator | 2026-01-03 02:11:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:11:57.165007 | orchestrator | 2026-01-03 02:11:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:11:57.166590 | orchestrator | 2026-01-03 02:11:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:11:57.166696 | orchestrator | 2026-01-03 02:11:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:00.218810 | orchestrator | 2026-01-03 02:12:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:00.221896 | orchestrator | 2026-01-03 02:12:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:00.221950 | orchestrator | 2026-01-03 02:12:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:03.283492 | orchestrator | 2026-01-03 02:12:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:03.284846 | orchestrator | 2026-01-03 02:12:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:03.285452 | orchestrator | 2026-01-03 02:12:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:06.335351 | orchestrator | 2026-01-03 02:12:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:06.336693 | orchestrator | 2026-01-03 02:12:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:12:06.336771 | orchestrator | 2026-01-03 02:12:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:09.388671 | orchestrator | 2026-01-03 02:12:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:09.391059 | orchestrator | 2026-01-03 02:12:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:09.391183 | orchestrator | 2026-01-03 02:12:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:12.438694 | orchestrator | 2026-01-03 02:12:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:12.440794 | orchestrator | 2026-01-03 02:12:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:12.440866 | orchestrator | 2026-01-03 02:12:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:15.482602 | orchestrator | 2026-01-03 02:12:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:15.484114 | orchestrator | 2026-01-03 02:12:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:15.484149 | orchestrator | 2026-01-03 02:12:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:18.537038 | orchestrator | 2026-01-03 02:12:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:18.539740 | orchestrator | 2026-01-03 02:12:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:18.539914 | orchestrator | 2026-01-03 02:12:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:21.586185 | orchestrator | 2026-01-03 02:12:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:21.588191 | orchestrator | 2026-01-03 02:12:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:21.588300 | orchestrator | 2026-01-03 02:12:21 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:12:24.633203 | orchestrator | 2026-01-03 02:12:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:24.635095 | orchestrator | 2026-01-03 02:12:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:24.635144 | orchestrator | 2026-01-03 02:12:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:27.680343 | orchestrator | 2026-01-03 02:12:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:27.682130 | orchestrator | 2026-01-03 02:12:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:27.682175 | orchestrator | 2026-01-03 02:12:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:30.726677 | orchestrator | 2026-01-03 02:12:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:30.728166 | orchestrator | 2026-01-03 02:12:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:30.728226 | orchestrator | 2026-01-03 02:12:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:33.777038 | orchestrator | 2026-01-03 02:12:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:33.779955 | orchestrator | 2026-01-03 02:12:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:33.780070 | orchestrator | 2026-01-03 02:12:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:36.826392 | orchestrator | 2026-01-03 02:12:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:36.828407 | orchestrator | 2026-01-03 02:12:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:36.828806 | orchestrator | 2026-01-03 02:12:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:39.875989 | orchestrator | 
2026-01-03 02:12:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:39.878922 | orchestrator | 2026-01-03 02:12:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:39.878966 | orchestrator | 2026-01-03 02:12:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:42.927886 | orchestrator | 2026-01-03 02:12:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:42.928878 | orchestrator | 2026-01-03 02:12:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:42.928949 | orchestrator | 2026-01-03 02:12:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:45.979057 | orchestrator | 2026-01-03 02:12:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:45.980740 | orchestrator | 2026-01-03 02:12:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:45.980801 | orchestrator | 2026-01-03 02:12:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:49.035818 | orchestrator | 2026-01-03 02:12:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:49.035906 | orchestrator | 2026-01-03 02:12:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:49.035915 | orchestrator | 2026-01-03 02:12:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:52.083417 | orchestrator | 2026-01-03 02:12:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:52.085768 | orchestrator | 2026-01-03 02:12:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:52.085959 | orchestrator | 2026-01-03 02:12:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:55.124027 | orchestrator | 2026-01-03 02:12:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:12:55.125404 | orchestrator | 2026-01-03 02:12:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:55.125487 | orchestrator | 2026-01-03 02:12:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:12:58.160206 | orchestrator | 2026-01-03 02:12:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:12:58.160623 | orchestrator | 2026-01-03 02:12:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:12:58.161224 | orchestrator | 2026-01-03 02:12:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:01.209391 | orchestrator | 2026-01-03 02:13:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:01.211604 | orchestrator | 2026-01-03 02:13:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:01.211673 | orchestrator | 2026-01-03 02:13:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:04.264863 | orchestrator | 2026-01-03 02:13:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:04.266764 | orchestrator | 2026-01-03 02:13:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:04.267055 | orchestrator | 2026-01-03 02:13:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:07.309708 | orchestrator | 2026-01-03 02:13:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:07.311040 | orchestrator | 2026-01-03 02:13:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:07.311278 | orchestrator | 2026-01-03 02:13:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:10.355358 | orchestrator | 2026-01-03 02:13:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:10.357389 | orchestrator | 2026-01-03 02:13:10 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:10.357465 | orchestrator | 2026-01-03 02:13:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:13.408121 | orchestrator | 2026-01-03 02:13:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:13.409543 | orchestrator | 2026-01-03 02:13:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:13.409623 | orchestrator | 2026-01-03 02:13:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:16.456749 | orchestrator | 2026-01-03 02:13:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:16.458661 | orchestrator | 2026-01-03 02:13:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:16.458724 | orchestrator | 2026-01-03 02:13:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:19.509191 | orchestrator | 2026-01-03 02:13:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:19.510634 | orchestrator | 2026-01-03 02:13:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:19.510706 | orchestrator | 2026-01-03 02:13:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:22.556780 | orchestrator | 2026-01-03 02:13:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:22.559168 | orchestrator | 2026-01-03 02:13:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:22.559245 | orchestrator | 2026-01-03 02:13:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:25.607901 | orchestrator | 2026-01-03 02:13:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:25.609746 | orchestrator | 2026-01-03 02:13:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:13:25.609803 | orchestrator | 2026-01-03 02:13:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:28.653810 | orchestrator | 2026-01-03 02:13:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:28.655140 | orchestrator | 2026-01-03 02:13:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:28.655325 | orchestrator | 2026-01-03 02:13:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:31.702319 | orchestrator | 2026-01-03 02:13:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:31.704297 | orchestrator | 2026-01-03 02:13:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:31.704382 | orchestrator | 2026-01-03 02:13:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:34.753192 | orchestrator | 2026-01-03 02:13:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:34.755224 | orchestrator | 2026-01-03 02:13:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:34.755516 | orchestrator | 2026-01-03 02:13:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:37.802903 | orchestrator | 2026-01-03 02:13:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:37.804784 | orchestrator | 2026-01-03 02:13:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:37.804863 | orchestrator | 2026-01-03 02:13:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:40.853364 | orchestrator | 2026-01-03 02:13:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:40.855224 | orchestrator | 2026-01-03 02:13:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:40.855293 | orchestrator | 2026-01-03 02:13:40 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:13:43.901531 | orchestrator | 2026-01-03 02:13:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:43.903366 | orchestrator | 2026-01-03 02:13:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:43.903450 | orchestrator | 2026-01-03 02:13:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:46.949883 | orchestrator | 2026-01-03 02:13:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:46.951609 | orchestrator | 2026-01-03 02:13:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:46.951671 | orchestrator | 2026-01-03 02:13:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:50.002370 | orchestrator | 2026-01-03 02:13:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:50.004572 | orchestrator | 2026-01-03 02:13:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:50.004642 | orchestrator | 2026-01-03 02:13:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:53.049461 | orchestrator | 2026-01-03 02:13:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:53.051251 | orchestrator | 2026-01-03 02:13:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:53.051348 | orchestrator | 2026-01-03 02:13:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:56.092827 | orchestrator | 2026-01-03 02:13:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:56.095476 | orchestrator | 2026-01-03 02:13:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:56.095551 | orchestrator | 2026-01-03 02:13:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:13:59.141148 | orchestrator | 
2026-01-03 02:13:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:13:59.143017 | orchestrator | 2026-01-03 02:13:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:13:59.143069 | orchestrator | 2026-01-03 02:13:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:02.194329 | orchestrator | 2026-01-03 02:14:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:02.197184 | orchestrator | 2026-01-03 02:14:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:02.197237 | orchestrator | 2026-01-03 02:14:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:05.243094 | orchestrator | 2026-01-03 02:14:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:05.243676 | orchestrator | 2026-01-03 02:14:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:05.244198 | orchestrator | 2026-01-03 02:14:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:08.299308 | orchestrator | 2026-01-03 02:14:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:08.301742 | orchestrator | 2026-01-03 02:14:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:08.302640 | orchestrator | 2026-01-03 02:14:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:11.343258 | orchestrator | 2026-01-03 02:14:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:11.345771 | orchestrator | 2026-01-03 02:14:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:11.345873 | orchestrator | 2026-01-03 02:14:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:14.409888 | orchestrator | 2026-01-03 02:14:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:14:14.409982 | orchestrator | 2026-01-03 02:14:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:14.409994 | orchestrator | 2026-01-03 02:14:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:17.453476 | orchestrator | 2026-01-03 02:14:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:17.455430 | orchestrator | 2026-01-03 02:14:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:17.455487 | orchestrator | 2026-01-03 02:14:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:20.499700 | orchestrator | 2026-01-03 02:14:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:20.501570 | orchestrator | 2026-01-03 02:14:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:20.501946 | orchestrator | 2026-01-03 02:14:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:23.548159 | orchestrator | 2026-01-03 02:14:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:23.550178 | orchestrator | 2026-01-03 02:14:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:23.550249 | orchestrator | 2026-01-03 02:14:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:26.592417 | orchestrator | 2026-01-03 02:14:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:26.595586 | orchestrator | 2026-01-03 02:14:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:26.595658 | orchestrator | 2026-01-03 02:14:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:29.644869 | orchestrator | 2026-01-03 02:14:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:29.647806 | orchestrator | 2026-01-03 02:14:29 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:29.647895 | orchestrator | 2026-01-03 02:14:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:32.693367 | orchestrator | 2026-01-03 02:14:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:32.694718 | orchestrator | 2026-01-03 02:14:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:32.694800 | orchestrator | 2026-01-03 02:14:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:35.745771 | orchestrator | 2026-01-03 02:14:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:35.747268 | orchestrator | 2026-01-03 02:14:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:35.747408 | orchestrator | 2026-01-03 02:14:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:38.796055 | orchestrator | 2026-01-03 02:14:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:38.798098 | orchestrator | 2026-01-03 02:14:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:38.798161 | orchestrator | 2026-01-03 02:14:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:41.848349 | orchestrator | 2026-01-03 02:14:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:41.850871 | orchestrator | 2026-01-03 02:14:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:41.850962 | orchestrator | 2026-01-03 02:14:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:44.897114 | orchestrator | 2026-01-03 02:14:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:44.898425 | orchestrator | 2026-01-03 02:14:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:14:44.898476 | orchestrator | 2026-01-03 02:14:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:47.945531 | orchestrator | 2026-01-03 02:14:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:47.947516 | orchestrator | 2026-01-03 02:14:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:47.947611 | orchestrator | 2026-01-03 02:14:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:50.997566 | orchestrator | 2026-01-03 02:14:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:50.999625 | orchestrator | 2026-01-03 02:14:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:50.999728 | orchestrator | 2026-01-03 02:14:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:54.048696 | orchestrator | 2026-01-03 02:14:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:54.050380 | orchestrator | 2026-01-03 02:14:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:54.050463 | orchestrator | 2026-01-03 02:14:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:14:57.088636 | orchestrator | 2026-01-03 02:14:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:14:57.088779 | orchestrator | 2026-01-03 02:14:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:14:57.088794 | orchestrator | 2026-01-03 02:14:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:15:00.135906 | orchestrator | 2026-01-03 02:15:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:15:00.137473 | orchestrator | 2026-01-03 02:15:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:15:00.137931 | orchestrator | 2026-01-03 02:15:00 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:15:03.186058 | orchestrator | 2026-01-03 02:15:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:15:03.188231 | orchestrator | 2026-01-03 02:15:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:15:03.215494 | orchestrator | 2026-01-03 02:15:03 | INFO  | Wait 1 second(s) until the next check
[... identical status-poll cycles repeated every ~3 seconds from 02:15:06 through 02:20:14; both tasks remained in state STARTED throughout ...]
2026-01-03 02:20:17.210644 | orchestrator | 2026-01-03 02:20:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:17.212706 | orchestrator | 2026-01-03 02:20:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:17.212788 | orchestrator | 2026-01-03 02:20:17 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:20:20.257521 | orchestrator | 2026-01-03 02:20:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:20.258011 | orchestrator | 2026-01-03 02:20:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:20.258136 | orchestrator | 2026-01-03 02:20:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:23.306402 | orchestrator | 2026-01-03 02:20:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:23.308613 | orchestrator | 2026-01-03 02:20:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:23.308739 | orchestrator | 2026-01-03 02:20:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:26.355502 | orchestrator | 2026-01-03 02:20:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:26.358749 | orchestrator | 2026-01-03 02:20:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:26.358847 | orchestrator | 2026-01-03 02:20:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:29.407664 | orchestrator | 2026-01-03 02:20:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:29.409820 | orchestrator | 2026-01-03 02:20:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:29.411962 | orchestrator | 2026-01-03 02:20:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:32.456244 | orchestrator | 2026-01-03 02:20:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:32.458849 | orchestrator | 2026-01-03 02:20:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:32.458948 | orchestrator | 2026-01-03 02:20:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:35.516127 | orchestrator | 
2026-01-03 02:20:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:35.518754 | orchestrator | 2026-01-03 02:20:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:35.518893 | orchestrator | 2026-01-03 02:20:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:38.568237 | orchestrator | 2026-01-03 02:20:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:38.570862 | orchestrator | 2026-01-03 02:20:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:38.570950 | orchestrator | 2026-01-03 02:20:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:41.618917 | orchestrator | 2026-01-03 02:20:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:41.621195 | orchestrator | 2026-01-03 02:20:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:41.621269 | orchestrator | 2026-01-03 02:20:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:44.674275 | orchestrator | 2026-01-03 02:20:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:44.676679 | orchestrator | 2026-01-03 02:20:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:44.676861 | orchestrator | 2026-01-03 02:20:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:47.729517 | orchestrator | 2026-01-03 02:20:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:47.733727 | orchestrator | 2026-01-03 02:20:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:47.733797 | orchestrator | 2026-01-03 02:20:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:50.785913 | orchestrator | 2026-01-03 02:20:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:20:50.789074 | orchestrator | 2026-01-03 02:20:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:50.789181 | orchestrator | 2026-01-03 02:20:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:53.836867 | orchestrator | 2026-01-03 02:20:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:53.838991 | orchestrator | 2026-01-03 02:20:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:53.839066 | orchestrator | 2026-01-03 02:20:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:56.887843 | orchestrator | 2026-01-03 02:20:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:56.889523 | orchestrator | 2026-01-03 02:20:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:56.889750 | orchestrator | 2026-01-03 02:20:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:20:59.937316 | orchestrator | 2026-01-03 02:20:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:20:59.940011 | orchestrator | 2026-01-03 02:20:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:20:59.940173 | orchestrator | 2026-01-03 02:20:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:02.985991 | orchestrator | 2026-01-03 02:21:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:02.987741 | orchestrator | 2026-01-03 02:21:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:02.987846 | orchestrator | 2026-01-03 02:21:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:06.043703 | orchestrator | 2026-01-03 02:21:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:06.045523 | orchestrator | 2026-01-03 02:21:06 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:06.045706 | orchestrator | 2026-01-03 02:21:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:09.099989 | orchestrator | 2026-01-03 02:21:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:09.100770 | orchestrator | 2026-01-03 02:21:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:09.100880 | orchestrator | 2026-01-03 02:21:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:12.146668 | orchestrator | 2026-01-03 02:21:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:12.148052 | orchestrator | 2026-01-03 02:21:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:12.148142 | orchestrator | 2026-01-03 02:21:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:15.192234 | orchestrator | 2026-01-03 02:21:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:15.192922 | orchestrator | 2026-01-03 02:21:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:15.193251 | orchestrator | 2026-01-03 02:21:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:18.243293 | orchestrator | 2026-01-03 02:21:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:18.245662 | orchestrator | 2026-01-03 02:21:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:18.245740 | orchestrator | 2026-01-03 02:21:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:21.292336 | orchestrator | 2026-01-03 02:21:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:21.294792 | orchestrator | 2026-01-03 02:21:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:21:21.294853 | orchestrator | 2026-01-03 02:21:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:24.342306 | orchestrator | 2026-01-03 02:21:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:24.345022 | orchestrator | 2026-01-03 02:21:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:24.345156 | orchestrator | 2026-01-03 02:21:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:27.396141 | orchestrator | 2026-01-03 02:21:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:27.397829 | orchestrator | 2026-01-03 02:21:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:27.397878 | orchestrator | 2026-01-03 02:21:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:30.439880 | orchestrator | 2026-01-03 02:21:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:30.441018 | orchestrator | 2026-01-03 02:21:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:30.441067 | orchestrator | 2026-01-03 02:21:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:33.494204 | orchestrator | 2026-01-03 02:21:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:33.496834 | orchestrator | 2026-01-03 02:21:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:33.496954 | orchestrator | 2026-01-03 02:21:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:36.544634 | orchestrator | 2026-01-03 02:21:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:36.546233 | orchestrator | 2026-01-03 02:21:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:36.546269 | orchestrator | 2026-01-03 02:21:36 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:21:39.595943 | orchestrator | 2026-01-03 02:21:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:39.597623 | orchestrator | 2026-01-03 02:21:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:39.597748 | orchestrator | 2026-01-03 02:21:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:42.648614 | orchestrator | 2026-01-03 02:21:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:42.651526 | orchestrator | 2026-01-03 02:21:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:42.651644 | orchestrator | 2026-01-03 02:21:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:45.696881 | orchestrator | 2026-01-03 02:21:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:45.698252 | orchestrator | 2026-01-03 02:21:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:45.698309 | orchestrator | 2026-01-03 02:21:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:48.746930 | orchestrator | 2026-01-03 02:21:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:48.749079 | orchestrator | 2026-01-03 02:21:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:48.749184 | orchestrator | 2026-01-03 02:21:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:51.793256 | orchestrator | 2026-01-03 02:21:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:51.798329 | orchestrator | 2026-01-03 02:21:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:51.798489 | orchestrator | 2026-01-03 02:21:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:54.841084 | orchestrator | 
2026-01-03 02:21:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:54.843388 | orchestrator | 2026-01-03 02:21:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:54.843480 | orchestrator | 2026-01-03 02:21:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:21:57.886885 | orchestrator | 2026-01-03 02:21:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:21:57.889405 | orchestrator | 2026-01-03 02:21:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:21:57.889475 | orchestrator | 2026-01-03 02:21:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:00.938821 | orchestrator | 2026-01-03 02:22:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:00.940691 | orchestrator | 2026-01-03 02:22:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:00.940783 | orchestrator | 2026-01-03 02:22:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:03.988141 | orchestrator | 2026-01-03 02:22:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:03.990164 | orchestrator | 2026-01-03 02:22:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:03.990259 | orchestrator | 2026-01-03 02:22:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:07.041920 | orchestrator | 2026-01-03 02:22:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:07.043942 | orchestrator | 2026-01-03 02:22:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:07.044040 | orchestrator | 2026-01-03 02:22:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:10.089709 | orchestrator | 2026-01-03 02:22:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:22:10.091649 | orchestrator | 2026-01-03 02:22:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:10.091730 | orchestrator | 2026-01-03 02:22:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:13.137655 | orchestrator | 2026-01-03 02:22:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:13.139737 | orchestrator | 2026-01-03 02:22:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:13.139795 | orchestrator | 2026-01-03 02:22:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:16.192745 | orchestrator | 2026-01-03 02:22:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:16.195565 | orchestrator | 2026-01-03 02:22:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:16.195654 | orchestrator | 2026-01-03 02:22:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:19.240118 | orchestrator | 2026-01-03 02:22:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:19.241696 | orchestrator | 2026-01-03 02:22:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:19.241750 | orchestrator | 2026-01-03 02:22:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:22.281686 | orchestrator | 2026-01-03 02:22:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:22.283293 | orchestrator | 2026-01-03 02:22:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:22.283358 | orchestrator | 2026-01-03 02:22:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:25.330619 | orchestrator | 2026-01-03 02:22:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:25.332098 | orchestrator | 2026-01-03 02:22:25 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:25.332151 | orchestrator | 2026-01-03 02:22:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:28.380410 | orchestrator | 2026-01-03 02:22:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:28.383925 | orchestrator | 2026-01-03 02:22:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:28.384021 | orchestrator | 2026-01-03 02:22:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:31.434671 | orchestrator | 2026-01-03 02:22:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:31.434887 | orchestrator | 2026-01-03 02:22:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:31.434937 | orchestrator | 2026-01-03 02:22:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:34.486117 | orchestrator | 2026-01-03 02:22:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:34.488720 | orchestrator | 2026-01-03 02:22:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:34.488854 | orchestrator | 2026-01-03 02:22:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:37.540661 | orchestrator | 2026-01-03 02:22:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:37.543647 | orchestrator | 2026-01-03 02:22:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:37.544086 | orchestrator | 2026-01-03 02:22:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:40.586544 | orchestrator | 2026-01-03 02:22:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:40.586809 | orchestrator | 2026-01-03 02:22:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:22:40.587324 | orchestrator | 2026-01-03 02:22:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:43.630263 | orchestrator | 2026-01-03 02:22:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:43.632590 | orchestrator | 2026-01-03 02:22:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:43.632649 | orchestrator | 2026-01-03 02:22:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:46.680011 | orchestrator | 2026-01-03 02:22:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:46.681402 | orchestrator | 2026-01-03 02:22:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:46.681440 | orchestrator | 2026-01-03 02:22:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:49.730628 | orchestrator | 2026-01-03 02:22:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:49.734540 | orchestrator | 2026-01-03 02:22:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:49.734620 | orchestrator | 2026-01-03 02:22:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:52.783385 | orchestrator | 2026-01-03 02:22:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:52.784809 | orchestrator | 2026-01-03 02:22:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:52.784867 | orchestrator | 2026-01-03 02:22:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:22:55.831599 | orchestrator | 2026-01-03 02:22:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:55.833466 | orchestrator | 2026-01-03 02:22:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:55.833589 | orchestrator | 2026-01-03 02:22:55 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:22:58.876412 | orchestrator | 2026-01-03 02:22:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:22:58.879411 | orchestrator | 2026-01-03 02:22:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:22:58.879484 | orchestrator | 2026-01-03 02:22:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:01.925317 | orchestrator | 2026-01-03 02:23:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:01.927855 | orchestrator | 2026-01-03 02:23:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:01.927919 | orchestrator | 2026-01-03 02:23:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:04.970269 | orchestrator | 2026-01-03 02:23:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:04.972138 | orchestrator | 2026-01-03 02:23:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:04.972240 | orchestrator | 2026-01-03 02:23:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:08.023881 | orchestrator | 2026-01-03 02:23:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:08.025716 | orchestrator | 2026-01-03 02:23:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:08.025832 | orchestrator | 2026-01-03 02:23:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:11.072908 | orchestrator | 2026-01-03 02:23:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:11.074586 | orchestrator | 2026-01-03 02:23:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:11.074683 | orchestrator | 2026-01-03 02:23:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:14.119958 | orchestrator | 
2026-01-03 02:23:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:14.121091 | orchestrator | 2026-01-03 02:23:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:14.121133 | orchestrator | 2026-01-03 02:23:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:17.171270 | orchestrator | 2026-01-03 02:23:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:17.173367 | orchestrator | 2026-01-03 02:23:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:17.173448 | orchestrator | 2026-01-03 02:23:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:20.222110 | orchestrator | 2026-01-03 02:23:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:20.225261 | orchestrator | 2026-01-03 02:23:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:20.225493 | orchestrator | 2026-01-03 02:23:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:23.273795 | orchestrator | 2026-01-03 02:23:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:23.275856 | orchestrator | 2026-01-03 02:23:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:23.275928 | orchestrator | 2026-01-03 02:23:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:26.319519 | orchestrator | 2026-01-03 02:23:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:26.321026 | orchestrator | 2026-01-03 02:23:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:26.321322 | orchestrator | 2026-01-03 02:23:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:29.363452 | orchestrator | 2026-01-03 02:23:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:23:29.364608 | orchestrator | 2026-01-03 02:23:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:29.364658 | orchestrator | 2026-01-03 02:23:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:32.415688 | orchestrator | 2026-01-03 02:23:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:32.417974 | orchestrator | 2026-01-03 02:23:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:32.418098 | orchestrator | 2026-01-03 02:23:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:35.461476 | orchestrator | 2026-01-03 02:23:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:35.463825 | orchestrator | 2026-01-03 02:23:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:35.463963 | orchestrator | 2026-01-03 02:23:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:38.514641 | orchestrator | 2026-01-03 02:23:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:38.516333 | orchestrator | 2026-01-03 02:23:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:38.516400 | orchestrator | 2026-01-03 02:23:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:41.562434 | orchestrator | 2026-01-03 02:23:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:41.563756 | orchestrator | 2026-01-03 02:23:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:41.563807 | orchestrator | 2026-01-03 02:23:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:44.609182 | orchestrator | 2026-01-03 02:23:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:44.609502 | orchestrator | 2026-01-03 02:23:44 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:44.609527 | orchestrator | 2026-01-03 02:23:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:47.662621 | orchestrator | 2026-01-03 02:23:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:47.664445 | orchestrator | 2026-01-03 02:23:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:47.664527 | orchestrator | 2026-01-03 02:23:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:50.710655 | orchestrator | 2026-01-03 02:23:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:50.712286 | orchestrator | 2026-01-03 02:23:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:50.712331 | orchestrator | 2026-01-03 02:23:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:53.756517 | orchestrator | 2026-01-03 02:23:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:53.758128 | orchestrator | 2026-01-03 02:23:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:53.758181 | orchestrator | 2026-01-03 02:23:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:56.805408 | orchestrator | 2026-01-03 02:23:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:56.806827 | orchestrator | 2026-01-03 02:23:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:23:56.806880 | orchestrator | 2026-01-03 02:23:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:23:59.850693 | orchestrator | 2026-01-03 02:23:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:23:59.852325 | orchestrator | 2026-01-03 02:23:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:23:59.852381 | orchestrator | 2026-01-03 02:23:59 | INFO  | Wait 1 second(s) until the next check
2026-01-03 02:24:02.908694 | orchestrator | 2026-01-03 02:24:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 02:24:02.910872 | orchestrator | 2026-01-03 02:24:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 02:24:02.910917 | orchestrator | 2026-01-03 02:24:02 | INFO  | Wait 1 second(s) until the next check
2026-01-03 02:29:32.290742 | orchestrator | 2026-01-03 02:29:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 02:29:32.292515 | orchestrator | 2026-01-03 02:29:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 02:29:32.292833 | orchestrator | 2026-01-03 02:29:32 | INFO  | Wait
1 second(s) until the next check 2026-01-03 02:29:35.338807 | orchestrator | 2026-01-03 02:29:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:29:35.340805 | orchestrator | 2026-01-03 02:29:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:29:35.340868 | orchestrator | 2026-01-03 02:29:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:38.392309 | orchestrator | 2026-01-03 02:29:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:29:38.393988 | orchestrator | 2026-01-03 02:29:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:29:38.394079 | orchestrator | 2026-01-03 02:29:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:41.437940 | orchestrator | 2026-01-03 02:29:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:29:41.440558 | orchestrator | 2026-01-03 02:29:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:29:41.440657 | orchestrator | 2026-01-03 02:29:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:44.482161 | orchestrator | 2026-01-03 02:29:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:29:44.483532 | orchestrator | 2026-01-03 02:29:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:29:44.483579 | orchestrator | 2026-01-03 02:29:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:47.539747 | orchestrator | 2026-01-03 02:29:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:29:47.541131 | orchestrator | 2026-01-03 02:29:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:29:47.541249 | orchestrator | 2026-01-03 02:29:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:50.590195 | orchestrator | 
2026-01-03 02:29:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:29:50.591841 | orchestrator | 2026-01-03 02:29:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:29:50.591904 | orchestrator | 2026-01-03 02:29:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:53.643661 | orchestrator | 2026-01-03 02:29:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:29:53.645160 | orchestrator | 2026-01-03 02:29:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:29:53.645234 | orchestrator | 2026-01-03 02:29:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:56.696805 | orchestrator | 2026-01-03 02:29:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:29:56.699356 | orchestrator | 2026-01-03 02:29:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:29:56.699413 | orchestrator | 2026-01-03 02:29:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:29:59.745479 | orchestrator | 2026-01-03 02:29:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:29:59.745762 | orchestrator | 2026-01-03 02:29:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:29:59.745799 | orchestrator | 2026-01-03 02:29:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:02.794684 | orchestrator | 2026-01-03 02:30:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:02.795965 | orchestrator | 2026-01-03 02:30:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:02.796001 | orchestrator | 2026-01-03 02:30:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:05.841611 | orchestrator | 2026-01-03 02:30:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:30:05.843678 | orchestrator | 2026-01-03 02:30:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:05.843728 | orchestrator | 2026-01-03 02:30:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:08.893717 | orchestrator | 2026-01-03 02:30:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:08.895930 | orchestrator | 2026-01-03 02:30:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:08.896020 | orchestrator | 2026-01-03 02:30:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:11.947887 | orchestrator | 2026-01-03 02:30:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:11.949337 | orchestrator | 2026-01-03 02:30:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:11.949394 | orchestrator | 2026-01-03 02:30:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:14.997967 | orchestrator | 2026-01-03 02:30:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:15.001229 | orchestrator | 2026-01-03 02:30:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:15.001370 | orchestrator | 2026-01-03 02:30:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:18.050436 | orchestrator | 2026-01-03 02:30:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:18.051689 | orchestrator | 2026-01-03 02:30:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:18.051754 | orchestrator | 2026-01-03 02:30:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:21.098287 | orchestrator | 2026-01-03 02:30:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:21.100743 | orchestrator | 2026-01-03 02:30:21 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:21.101235 | orchestrator | 2026-01-03 02:30:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:24.149070 | orchestrator | 2026-01-03 02:30:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:24.149146 | orchestrator | 2026-01-03 02:30:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:24.149154 | orchestrator | 2026-01-03 02:30:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:27.196790 | orchestrator | 2026-01-03 02:30:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:27.198475 | orchestrator | 2026-01-03 02:30:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:27.198556 | orchestrator | 2026-01-03 02:30:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:30.239180 | orchestrator | 2026-01-03 02:30:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:30.240914 | orchestrator | 2026-01-03 02:30:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:30.240976 | orchestrator | 2026-01-03 02:30:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:33.290475 | orchestrator | 2026-01-03 02:30:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:33.292041 | orchestrator | 2026-01-03 02:30:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:33.292058 | orchestrator | 2026-01-03 02:30:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:36.339028 | orchestrator | 2026-01-03 02:30:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:36.341536 | orchestrator | 2026-01-03 02:30:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:30:36.341622 | orchestrator | 2026-01-03 02:30:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:39.393823 | orchestrator | 2026-01-03 02:30:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:39.395399 | orchestrator | 2026-01-03 02:30:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:39.395455 | orchestrator | 2026-01-03 02:30:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:42.446481 | orchestrator | 2026-01-03 02:30:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:42.448663 | orchestrator | 2026-01-03 02:30:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:42.449200 | orchestrator | 2026-01-03 02:30:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:45.496690 | orchestrator | 2026-01-03 02:30:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:45.497712 | orchestrator | 2026-01-03 02:30:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:45.497838 | orchestrator | 2026-01-03 02:30:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:48.540766 | orchestrator | 2026-01-03 02:30:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:48.542683 | orchestrator | 2026-01-03 02:30:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:48.543008 | orchestrator | 2026-01-03 02:30:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:51.593006 | orchestrator | 2026-01-03 02:30:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:51.594662 | orchestrator | 2026-01-03 02:30:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:51.594706 | orchestrator | 2026-01-03 02:30:51 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:30:54.639884 | orchestrator | 2026-01-03 02:30:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:54.641029 | orchestrator | 2026-01-03 02:30:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:54.641094 | orchestrator | 2026-01-03 02:30:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:30:57.679603 | orchestrator | 2026-01-03 02:30:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:30:57.682422 | orchestrator | 2026-01-03 02:30:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:30:57.682509 | orchestrator | 2026-01-03 02:30:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:00.733795 | orchestrator | 2026-01-03 02:31:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:00.735505 | orchestrator | 2026-01-03 02:31:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:00.735558 | orchestrator | 2026-01-03 02:31:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:03.783781 | orchestrator | 2026-01-03 02:31:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:03.783921 | orchestrator | 2026-01-03 02:31:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:03.783934 | orchestrator | 2026-01-03 02:31:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:06.835007 | orchestrator | 2026-01-03 02:31:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:06.836646 | orchestrator | 2026-01-03 02:31:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:06.837234 | orchestrator | 2026-01-03 02:31:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:09.883704 | orchestrator | 
2026-01-03 02:31:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:09.885674 | orchestrator | 2026-01-03 02:31:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:09.885850 | orchestrator | 2026-01-03 02:31:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:12.934300 | orchestrator | 2026-01-03 02:31:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:12.936658 | orchestrator | 2026-01-03 02:31:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:12.936814 | orchestrator | 2026-01-03 02:31:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:15.986935 | orchestrator | 2026-01-03 02:31:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:15.989388 | orchestrator | 2026-01-03 02:31:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:15.989467 | orchestrator | 2026-01-03 02:31:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:19.033066 | orchestrator | 2026-01-03 02:31:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:19.033305 | orchestrator | 2026-01-03 02:31:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:19.033533 | orchestrator | 2026-01-03 02:31:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:22.084798 | orchestrator | 2026-01-03 02:31:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:22.085527 | orchestrator | 2026-01-03 02:31:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:22.086075 | orchestrator | 2026-01-03 02:31:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:25.129007 | orchestrator | 2026-01-03 02:31:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:31:25.129156 | orchestrator | 2026-01-03 02:31:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:25.129169 | orchestrator | 2026-01-03 02:31:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:28.180537 | orchestrator | 2026-01-03 02:31:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:28.182478 | orchestrator | 2026-01-03 02:31:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:28.182515 | orchestrator | 2026-01-03 02:31:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:31.238488 | orchestrator | 2026-01-03 02:31:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:31.238581 | orchestrator | 2026-01-03 02:31:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:31.238589 | orchestrator | 2026-01-03 02:31:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:34.299213 | orchestrator | 2026-01-03 02:31:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:34.299684 | orchestrator | 2026-01-03 02:31:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:34.300005 | orchestrator | 2026-01-03 02:31:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:37.337086 | orchestrator | 2026-01-03 02:31:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:37.337373 | orchestrator | 2026-01-03 02:31:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:37.337399 | orchestrator | 2026-01-03 02:31:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:40.388383 | orchestrator | 2026-01-03 02:31:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:40.389482 | orchestrator | 2026-01-03 02:31:40 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:40.389564 | orchestrator | 2026-01-03 02:31:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:43.438762 | orchestrator | 2026-01-03 02:31:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:43.441202 | orchestrator | 2026-01-03 02:31:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:43.441275 | orchestrator | 2026-01-03 02:31:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:46.493650 | orchestrator | 2026-01-03 02:31:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:46.494964 | orchestrator | 2026-01-03 02:31:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:46.495019 | orchestrator | 2026-01-03 02:31:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:49.540867 | orchestrator | 2026-01-03 02:31:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:49.542153 | orchestrator | 2026-01-03 02:31:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:49.542202 | orchestrator | 2026-01-03 02:31:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:52.583646 | orchestrator | 2026-01-03 02:31:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:52.584275 | orchestrator | 2026-01-03 02:31:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:52.584339 | orchestrator | 2026-01-03 02:31:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:55.639114 | orchestrator | 2026-01-03 02:31:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:55.640382 | orchestrator | 2026-01-03 02:31:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:31:55.640496 | orchestrator | 2026-01-03 02:31:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:31:58.686748 | orchestrator | 2026-01-03 02:31:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:31:58.688353 | orchestrator | 2026-01-03 02:31:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:31:58.688493 | orchestrator | 2026-01-03 02:31:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:01.741967 | orchestrator | 2026-01-03 02:32:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:01.742857 | orchestrator | 2026-01-03 02:32:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:01.742884 | orchestrator | 2026-01-03 02:32:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:04.792449 | orchestrator | 2026-01-03 02:32:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:04.793841 | orchestrator | 2026-01-03 02:32:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:04.793878 | orchestrator | 2026-01-03 02:32:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:07.838470 | orchestrator | 2026-01-03 02:32:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:07.839601 | orchestrator | 2026-01-03 02:32:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:07.839917 | orchestrator | 2026-01-03 02:32:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:10.883904 | orchestrator | 2026-01-03 02:32:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:10.886210 | orchestrator | 2026-01-03 02:32:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:10.886529 | orchestrator | 2026-01-03 02:32:10 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:32:13.927391 | orchestrator | 2026-01-03 02:32:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:13.929103 | orchestrator | 2026-01-03 02:32:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:13.929294 | orchestrator | 2026-01-03 02:32:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:16.977491 | orchestrator | 2026-01-03 02:32:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:16.978199 | orchestrator | 2026-01-03 02:32:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:16.978235 | orchestrator | 2026-01-03 02:32:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:20.038490 | orchestrator | 2026-01-03 02:32:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:20.040419 | orchestrator | 2026-01-03 02:32:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:20.040471 | orchestrator | 2026-01-03 02:32:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:23.085702 | orchestrator | 2026-01-03 02:32:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:23.086974 | orchestrator | 2026-01-03 02:32:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:23.087018 | orchestrator | 2026-01-03 02:32:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:26.137820 | orchestrator | 2026-01-03 02:32:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:26.139470 | orchestrator | 2026-01-03 02:32:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:26.139503 | orchestrator | 2026-01-03 02:32:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:29.187778 | orchestrator | 
2026-01-03 02:32:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:29.189153 | orchestrator | 2026-01-03 02:32:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:29.189210 | orchestrator | 2026-01-03 02:32:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:32.237253 | orchestrator | 2026-01-03 02:32:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:32.239283 | orchestrator | 2026-01-03 02:32:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:32.239381 | orchestrator | 2026-01-03 02:32:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:35.287622 | orchestrator | 2026-01-03 02:32:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:35.289488 | orchestrator | 2026-01-03 02:32:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:35.289560 | orchestrator | 2026-01-03 02:32:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:38.337490 | orchestrator | 2026-01-03 02:32:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:38.339583 | orchestrator | 2026-01-03 02:32:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:38.339677 | orchestrator | 2026-01-03 02:32:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:41.393614 | orchestrator | 2026-01-03 02:32:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:41.395880 | orchestrator | 2026-01-03 02:32:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:41.395929 | orchestrator | 2026-01-03 02:32:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:44.441467 | orchestrator | 2026-01-03 02:32:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:32:44.443293 | orchestrator | 2026-01-03 02:32:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:44.443423 | orchestrator | 2026-01-03 02:32:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:47.488977 | orchestrator | 2026-01-03 02:32:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:47.490235 | orchestrator | 2026-01-03 02:32:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:47.490294 | orchestrator | 2026-01-03 02:32:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:50.535032 | orchestrator | 2026-01-03 02:32:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:50.536278 | orchestrator | 2026-01-03 02:32:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:50.536396 | orchestrator | 2026-01-03 02:32:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:53.585764 | orchestrator | 2026-01-03 02:32:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:53.588688 | orchestrator | 2026-01-03 02:32:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:53.588850 | orchestrator | 2026-01-03 02:32:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:56.635652 | orchestrator | 2026-01-03 02:32:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:56.637514 | orchestrator | 2026-01-03 02:32:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:56.637544 | orchestrator | 2026-01-03 02:32:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:32:59.683831 | orchestrator | 2026-01-03 02:32:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:32:59.686272 | orchestrator | 2026-01-03 02:32:59 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:32:59.686398 | orchestrator | 2026-01-03 02:32:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:33:02.733948 | orchestrator | 2026-01-03 02:33:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:33:02.735240 | orchestrator | 2026-01-03 02:33:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:33:02.735301 | orchestrator | 2026-01-03 02:33:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:33:05.787415 | orchestrator | 2026-01-03 02:33:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:33:05.788704 | orchestrator | 2026-01-03 02:33:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:33:05.788736 | orchestrator | 2026-01-03 02:33:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:33:08.832590 | orchestrator | 2026-01-03 02:33:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:33:08.834508 | orchestrator | 2026-01-03 02:33:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:33:08.834563 | orchestrator | 2026-01-03 02:33:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:33:11.875887 | orchestrator | 2026-01-03 02:33:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:33:11.877580 | orchestrator | 2026-01-03 02:33:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:33:11.877637 | orchestrator | 2026-01-03 02:33:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:33:14.926517 | orchestrator | 2026-01-03 02:33:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:33:14.928296 | orchestrator | 2026-01-03 02:33:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:33:14.928436 | orchestrator | 2026-01-03 02:33:14 | INFO  | Wait 1 second(s) until the next check
2026-01-03 02:33:17.977805 | orchestrator | 2026-01-03 02:33:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 02:33:17.979722 | orchestrator | 2026-01-03 02:33:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 02:33:17.979781 | orchestrator | 2026-01-03 02:33:17 | INFO  | Wait 1 second(s) until the next check
[... identical status-check cycles repeated every ~3 seconds from 02:33:21 through 02:38:13; both tasks remained in state STARTED throughout ...]
2026-01-03 02:38:16.854559 | orchestrator | 2026-01-03 02:38:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 02:38:16.857422 | orchestrator | 2026-01-03 02:38:16 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:16.857487 | orchestrator | 2026-01-03 02:38:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:19.904691 | orchestrator | 2026-01-03 02:38:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:19.905576 | orchestrator | 2026-01-03 02:38:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:19.905620 | orchestrator | 2026-01-03 02:38:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:22.953312 | orchestrator | 2026-01-03 02:38:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:22.954874 | orchestrator | 2026-01-03 02:38:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:22.954925 | orchestrator | 2026-01-03 02:38:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:25.998593 | orchestrator | 2026-01-03 02:38:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:26.000319 | orchestrator | 2026-01-03 02:38:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:26.000405 | orchestrator | 2026-01-03 02:38:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:29.046281 | orchestrator | 2026-01-03 02:38:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:29.047731 | orchestrator | 2026-01-03 02:38:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:29.047781 | orchestrator | 2026-01-03 02:38:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:32.087031 | orchestrator | 2026-01-03 02:38:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:32.087928 | orchestrator | 2026-01-03 02:38:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:38:32.088007 | orchestrator | 2026-01-03 02:38:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:35.135823 | orchestrator | 2026-01-03 02:38:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:35.137839 | orchestrator | 2026-01-03 02:38:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:35.137898 | orchestrator | 2026-01-03 02:38:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:38.182885 | orchestrator | 2026-01-03 02:38:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:38.185659 | orchestrator | 2026-01-03 02:38:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:38.185723 | orchestrator | 2026-01-03 02:38:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:41.237463 | orchestrator | 2026-01-03 02:38:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:41.239149 | orchestrator | 2026-01-03 02:38:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:41.239178 | orchestrator | 2026-01-03 02:38:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:44.289991 | orchestrator | 2026-01-03 02:38:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:44.292975 | orchestrator | 2026-01-03 02:38:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:44.293061 | orchestrator | 2026-01-03 02:38:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:47.345484 | orchestrator | 2026-01-03 02:38:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:47.346876 | orchestrator | 2026-01-03 02:38:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:47.346958 | orchestrator | 2026-01-03 02:38:47 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:38:50.393095 | orchestrator | 2026-01-03 02:38:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:50.393650 | orchestrator | 2026-01-03 02:38:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:50.393673 | orchestrator | 2026-01-03 02:38:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:53.443165 | orchestrator | 2026-01-03 02:38:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:53.445229 | orchestrator | 2026-01-03 02:38:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:53.445323 | orchestrator | 2026-01-03 02:38:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:56.493024 | orchestrator | 2026-01-03 02:38:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:56.494944 | orchestrator | 2026-01-03 02:38:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:56.494985 | orchestrator | 2026-01-03 02:38:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:38:59.539839 | orchestrator | 2026-01-03 02:38:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:38:59.541580 | orchestrator | 2026-01-03 02:38:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:38:59.541886 | orchestrator | 2026-01-03 02:38:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:02.583536 | orchestrator | 2026-01-03 02:39:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:02.585685 | orchestrator | 2026-01-03 02:39:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:02.585807 | orchestrator | 2026-01-03 02:39:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:05.633057 | orchestrator | 
2026-01-03 02:39:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:05.634755 | orchestrator | 2026-01-03 02:39:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:05.634837 | orchestrator | 2026-01-03 02:39:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:08.683272 | orchestrator | 2026-01-03 02:39:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:08.685330 | orchestrator | 2026-01-03 02:39:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:08.685457 | orchestrator | 2026-01-03 02:39:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:11.735099 | orchestrator | 2026-01-03 02:39:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:11.737067 | orchestrator | 2026-01-03 02:39:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:11.737110 | orchestrator | 2026-01-03 02:39:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:14.781124 | orchestrator | 2026-01-03 02:39:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:14.782797 | orchestrator | 2026-01-03 02:39:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:14.782889 | orchestrator | 2026-01-03 02:39:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:17.830176 | orchestrator | 2026-01-03 02:39:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:17.832728 | orchestrator | 2026-01-03 02:39:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:17.832779 | orchestrator | 2026-01-03 02:39:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:20.882464 | orchestrator | 2026-01-03 02:39:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:39:20.883470 | orchestrator | 2026-01-03 02:39:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:20.883521 | orchestrator | 2026-01-03 02:39:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:23.933784 | orchestrator | 2026-01-03 02:39:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:23.936125 | orchestrator | 2026-01-03 02:39:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:23.936191 | orchestrator | 2026-01-03 02:39:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:26.982530 | orchestrator | 2026-01-03 02:39:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:26.984582 | orchestrator | 2026-01-03 02:39:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:26.984646 | orchestrator | 2026-01-03 02:39:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:30.041684 | orchestrator | 2026-01-03 02:39:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:30.043380 | orchestrator | 2026-01-03 02:39:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:30.043579 | orchestrator | 2026-01-03 02:39:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:33.093332 | orchestrator | 2026-01-03 02:39:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:33.094180 | orchestrator | 2026-01-03 02:39:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:33.094231 | orchestrator | 2026-01-03 02:39:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:36.149304 | orchestrator | 2026-01-03 02:39:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:36.151808 | orchestrator | 2026-01-03 02:39:36 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:36.151904 | orchestrator | 2026-01-03 02:39:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:39.192872 | orchestrator | 2026-01-03 02:39:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:39.194882 | orchestrator | 2026-01-03 02:39:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:39.194930 | orchestrator | 2026-01-03 02:39:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:42.245088 | orchestrator | 2026-01-03 02:39:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:42.247837 | orchestrator | 2026-01-03 02:39:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:42.248009 | orchestrator | 2026-01-03 02:39:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:45.294149 | orchestrator | 2026-01-03 02:39:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:45.296871 | orchestrator | 2026-01-03 02:39:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:45.296952 | orchestrator | 2026-01-03 02:39:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:48.344582 | orchestrator | 2026-01-03 02:39:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:48.346529 | orchestrator | 2026-01-03 02:39:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:48.346580 | orchestrator | 2026-01-03 02:39:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:51.396954 | orchestrator | 2026-01-03 02:39:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:51.398590 | orchestrator | 2026-01-03 02:39:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:39:51.398655 | orchestrator | 2026-01-03 02:39:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:54.442994 | orchestrator | 2026-01-03 02:39:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:54.444757 | orchestrator | 2026-01-03 02:39:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:54.444824 | orchestrator | 2026-01-03 02:39:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:39:57.486372 | orchestrator | 2026-01-03 02:39:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:39:57.488132 | orchestrator | 2026-01-03 02:39:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:39:57.488166 | orchestrator | 2026-01-03 02:39:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:00.532009 | orchestrator | 2026-01-03 02:40:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:00.535626 | orchestrator | 2026-01-03 02:40:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:00.535731 | orchestrator | 2026-01-03 02:40:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:03.579127 | orchestrator | 2026-01-03 02:40:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:03.581172 | orchestrator | 2026-01-03 02:40:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:03.581363 | orchestrator | 2026-01-03 02:40:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:06.625497 | orchestrator | 2026-01-03 02:40:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:06.626646 | orchestrator | 2026-01-03 02:40:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:06.626696 | orchestrator | 2026-01-03 02:40:06 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:40:09.676774 | orchestrator | 2026-01-03 02:40:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:09.678961 | orchestrator | 2026-01-03 02:40:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:09.679290 | orchestrator | 2026-01-03 02:40:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:12.720618 | orchestrator | 2026-01-03 02:40:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:12.722841 | orchestrator | 2026-01-03 02:40:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:12.722907 | orchestrator | 2026-01-03 02:40:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:15.770148 | orchestrator | 2026-01-03 02:40:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:15.772001 | orchestrator | 2026-01-03 02:40:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:15.772127 | orchestrator | 2026-01-03 02:40:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:18.817120 | orchestrator | 2026-01-03 02:40:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:18.819258 | orchestrator | 2026-01-03 02:40:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:18.819306 | orchestrator | 2026-01-03 02:40:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:21.867358 | orchestrator | 2026-01-03 02:40:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:21.868517 | orchestrator | 2026-01-03 02:40:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:21.868630 | orchestrator | 2026-01-03 02:40:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:24.915117 | orchestrator | 
2026-01-03 02:40:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:24.917277 | orchestrator | 2026-01-03 02:40:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:24.917336 | orchestrator | 2026-01-03 02:40:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:27.966995 | orchestrator | 2026-01-03 02:40:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:27.968978 | orchestrator | 2026-01-03 02:40:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:27.969857 | orchestrator | 2026-01-03 02:40:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:31.015933 | orchestrator | 2026-01-03 02:40:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:31.017052 | orchestrator | 2026-01-03 02:40:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:31.017104 | orchestrator | 2026-01-03 02:40:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:34.064251 | orchestrator | 2026-01-03 02:40:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:34.065829 | orchestrator | 2026-01-03 02:40:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:34.065864 | orchestrator | 2026-01-03 02:40:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:37.117099 | orchestrator | 2026-01-03 02:40:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:37.118871 | orchestrator | 2026-01-03 02:40:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:37.118931 | orchestrator | 2026-01-03 02:40:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:40.149411 | orchestrator | 2026-01-03 02:40:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:40:40.150747 | orchestrator | 2026-01-03 02:40:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:40.150793 | orchestrator | 2026-01-03 02:40:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:43.193304 | orchestrator | 2026-01-03 02:40:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:43.195199 | orchestrator | 2026-01-03 02:40:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:43.195234 | orchestrator | 2026-01-03 02:40:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:46.242383 | orchestrator | 2026-01-03 02:40:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:46.244572 | orchestrator | 2026-01-03 02:40:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:46.244634 | orchestrator | 2026-01-03 02:40:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:49.291724 | orchestrator | 2026-01-03 02:40:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:49.291796 | orchestrator | 2026-01-03 02:40:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:49.291803 | orchestrator | 2026-01-03 02:40:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:52.333747 | orchestrator | 2026-01-03 02:40:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:52.336117 | orchestrator | 2026-01-03 02:40:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:52.336190 | orchestrator | 2026-01-03 02:40:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:55.383202 | orchestrator | 2026-01-03 02:40:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:55.386368 | orchestrator | 2026-01-03 02:40:55 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:55.386883 | orchestrator | 2026-01-03 02:40:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:40:58.435581 | orchestrator | 2026-01-03 02:40:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:40:58.437655 | orchestrator | 2026-01-03 02:40:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:40:58.437713 | orchestrator | 2026-01-03 02:40:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:01.486293 | orchestrator | 2026-01-03 02:41:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:01.490499 | orchestrator | 2026-01-03 02:41:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:01.491245 | orchestrator | 2026-01-03 02:41:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:04.540563 | orchestrator | 2026-01-03 02:41:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:04.542960 | orchestrator | 2026-01-03 02:41:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:04.543022 | orchestrator | 2026-01-03 02:41:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:07.589150 | orchestrator | 2026-01-03 02:41:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:07.591038 | orchestrator | 2026-01-03 02:41:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:07.591152 | orchestrator | 2026-01-03 02:41:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:10.643752 | orchestrator | 2026-01-03 02:41:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:10.645958 | orchestrator | 2026-01-03 02:41:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:41:10.646094 | orchestrator | 2026-01-03 02:41:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:13.690688 | orchestrator | 2026-01-03 02:41:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:13.692334 | orchestrator | 2026-01-03 02:41:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:13.692367 | orchestrator | 2026-01-03 02:41:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:16.743177 | orchestrator | 2026-01-03 02:41:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:16.745344 | orchestrator | 2026-01-03 02:41:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:16.745430 | orchestrator | 2026-01-03 02:41:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:19.791137 | orchestrator | 2026-01-03 02:41:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:19.791843 | orchestrator | 2026-01-03 02:41:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:19.791880 | orchestrator | 2026-01-03 02:41:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:22.838346 | orchestrator | 2026-01-03 02:41:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:22.839848 | orchestrator | 2026-01-03 02:41:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:22.839942 | orchestrator | 2026-01-03 02:41:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:25.880549 | orchestrator | 2026-01-03 02:41:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:25.883025 | orchestrator | 2026-01-03 02:41:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:25.883094 | orchestrator | 2026-01-03 02:41:25 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:41:28.930067 | orchestrator | 2026-01-03 02:41:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:28.932769 | orchestrator | 2026-01-03 02:41:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:28.932836 | orchestrator | 2026-01-03 02:41:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:31.983060 | orchestrator | 2026-01-03 02:41:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:31.984409 | orchestrator | 2026-01-03 02:41:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:31.984458 | orchestrator | 2026-01-03 02:41:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:35.025228 | orchestrator | 2026-01-03 02:41:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:35.027234 | orchestrator | 2026-01-03 02:41:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:35.027294 | orchestrator | 2026-01-03 02:41:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:38.072622 | orchestrator | 2026-01-03 02:41:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:38.072741 | orchestrator | 2026-01-03 02:41:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:38.072752 | orchestrator | 2026-01-03 02:41:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:41.116907 | orchestrator | 2026-01-03 02:41:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:41.118828 | orchestrator | 2026-01-03 02:41:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:41.118890 | orchestrator | 2026-01-03 02:41:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:44.159194 | orchestrator | 
2026-01-03 02:41:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:44.161465 | orchestrator | 2026-01-03 02:41:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:44.161572 | orchestrator | 2026-01-03 02:41:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:47.207895 | orchestrator | 2026-01-03 02:41:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:47.209403 | orchestrator | 2026-01-03 02:41:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:47.209458 | orchestrator | 2026-01-03 02:41:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:50.268899 | orchestrator | 2026-01-03 02:41:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:50.270690 | orchestrator | 2026-01-03 02:41:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:50.271416 | orchestrator | 2026-01-03 02:41:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:53.320276 | orchestrator | 2026-01-03 02:41:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:53.322688 | orchestrator | 2026-01-03 02:41:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:53.322869 | orchestrator | 2026-01-03 02:41:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:56.376881 | orchestrator | 2026-01-03 02:41:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:41:56.380689 | orchestrator | 2026-01-03 02:41:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:56.381193 | orchestrator | 2026-01-03 02:41:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:41:59.432849 | orchestrator | 2026-01-03 02:41:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:41:59.432943 | orchestrator | 2026-01-03 02:41:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:41:59.432955 | orchestrator | 2026-01-03 02:41:59 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds: tasks c8921031-5513-4b9c-ad45-42b04eed7ef5 and bba6cba5-900c-422c-89b8-94737fda4049 remained in state STARTED from 02:41:59 through 02:47:31 ...]
2026-01-03 02:47:31.802397 | orchestrator | 2026-01-03 02:47:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:47:31.803431 | orchestrator | 2026-01-03 02:47:31 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:47:31.803514 | orchestrator | 2026-01-03 02:47:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:34.852724 | orchestrator | 2026-01-03 02:47:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:47:34.854571 | orchestrator | 2026-01-03 02:47:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:47:34.854630 | orchestrator | 2026-01-03 02:47:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:37.904117 | orchestrator | 2026-01-03 02:47:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:47:37.907370 | orchestrator | 2026-01-03 02:47:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:47:37.907438 | orchestrator | 2026-01-03 02:47:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:40.955293 | orchestrator | 2026-01-03 02:47:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:47:40.958720 | orchestrator | 2026-01-03 02:47:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:47:40.958791 | orchestrator | 2026-01-03 02:47:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:44.010761 | orchestrator | 2026-01-03 02:47:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:47:44.012615 | orchestrator | 2026-01-03 02:47:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:47:44.012679 | orchestrator | 2026-01-03 02:47:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:47.063472 | orchestrator | 2026-01-03 02:47:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:47:47.065028 | orchestrator | 2026-01-03 02:47:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:47:47.065094 | orchestrator | 2026-01-03 02:47:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:50.126647 | orchestrator | 2026-01-03 02:47:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:47:50.128586 | orchestrator | 2026-01-03 02:47:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:47:50.128644 | orchestrator | 2026-01-03 02:47:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:53.184742 | orchestrator | 2026-01-03 02:47:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:47:53.186856 | orchestrator | 2026-01-03 02:47:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:47:53.186970 | orchestrator | 2026-01-03 02:47:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:56.240897 | orchestrator | 2026-01-03 02:47:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:47:56.242735 | orchestrator | 2026-01-03 02:47:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:47:56.242793 | orchestrator | 2026-01-03 02:47:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:47:59.289247 | orchestrator | 2026-01-03 02:47:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:47:59.291294 | orchestrator | 2026-01-03 02:47:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:47:59.291426 | orchestrator | 2026-01-03 02:47:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:02.343853 | orchestrator | 2026-01-03 02:48:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:02.347291 | orchestrator | 2026-01-03 02:48:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:02.347461 | orchestrator | 2026-01-03 02:48:02 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:48:05.397123 | orchestrator | 2026-01-03 02:48:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:05.398119 | orchestrator | 2026-01-03 02:48:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:05.398171 | orchestrator | 2026-01-03 02:48:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:08.452519 | orchestrator | 2026-01-03 02:48:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:08.455895 | orchestrator | 2026-01-03 02:48:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:08.455980 | orchestrator | 2026-01-03 02:48:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:11.499520 | orchestrator | 2026-01-03 02:48:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:11.501252 | orchestrator | 2026-01-03 02:48:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:11.501433 | orchestrator | 2026-01-03 02:48:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:14.554482 | orchestrator | 2026-01-03 02:48:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:14.556230 | orchestrator | 2026-01-03 02:48:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:14.556400 | orchestrator | 2026-01-03 02:48:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:17.606008 | orchestrator | 2026-01-03 02:48:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:17.607279 | orchestrator | 2026-01-03 02:48:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:17.607318 | orchestrator | 2026-01-03 02:48:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:20.650158 | orchestrator | 
2026-01-03 02:48:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:20.651415 | orchestrator | 2026-01-03 02:48:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:20.651569 | orchestrator | 2026-01-03 02:48:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:23.695429 | orchestrator | 2026-01-03 02:48:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:23.697056 | orchestrator | 2026-01-03 02:48:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:23.697108 | orchestrator | 2026-01-03 02:48:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:26.739455 | orchestrator | 2026-01-03 02:48:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:26.741148 | orchestrator | 2026-01-03 02:48:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:26.741246 | orchestrator | 2026-01-03 02:48:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:29.790840 | orchestrator | 2026-01-03 02:48:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:29.793285 | orchestrator | 2026-01-03 02:48:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:29.793501 | orchestrator | 2026-01-03 02:48:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:32.841411 | orchestrator | 2026-01-03 02:48:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:32.842005 | orchestrator | 2026-01-03 02:48:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:32.842320 | orchestrator | 2026-01-03 02:48:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:35.890501 | orchestrator | 2026-01-03 02:48:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:48:35.892271 | orchestrator | 2026-01-03 02:48:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:35.892340 | orchestrator | 2026-01-03 02:48:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:38.941939 | orchestrator | 2026-01-03 02:48:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:38.942500 | orchestrator | 2026-01-03 02:48:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:38.942864 | orchestrator | 2026-01-03 02:48:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:41.982129 | orchestrator | 2026-01-03 02:48:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:41.983941 | orchestrator | 2026-01-03 02:48:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:41.984001 | orchestrator | 2026-01-03 02:48:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:45.029983 | orchestrator | 2026-01-03 02:48:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:45.031726 | orchestrator | 2026-01-03 02:48:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:45.031799 | orchestrator | 2026-01-03 02:48:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:48.090527 | orchestrator | 2026-01-03 02:48:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:48.090609 | orchestrator | 2026-01-03 02:48:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:48.090662 | orchestrator | 2026-01-03 02:48:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:51.131664 | orchestrator | 2026-01-03 02:48:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:51.133340 | orchestrator | 2026-01-03 02:48:51 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:51.133398 | orchestrator | 2026-01-03 02:48:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:54.182371 | orchestrator | 2026-01-03 02:48:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:54.184615 | orchestrator | 2026-01-03 02:48:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:54.184670 | orchestrator | 2026-01-03 02:48:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:48:57.242080 | orchestrator | 2026-01-03 02:48:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:48:57.244877 | orchestrator | 2026-01-03 02:48:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:48:57.244952 | orchestrator | 2026-01-03 02:48:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:00.294367 | orchestrator | 2026-01-03 02:49:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:00.295833 | orchestrator | 2026-01-03 02:49:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:00.295889 | orchestrator | 2026-01-03 02:49:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:03.348283 | orchestrator | 2026-01-03 02:49:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:03.350168 | orchestrator | 2026-01-03 02:49:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:03.350197 | orchestrator | 2026-01-03 02:49:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:06.397821 | orchestrator | 2026-01-03 02:49:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:06.399778 | orchestrator | 2026-01-03 02:49:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:49:06.399824 | orchestrator | 2026-01-03 02:49:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:09.444905 | orchestrator | 2026-01-03 02:49:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:09.446434 | orchestrator | 2026-01-03 02:49:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:09.446484 | orchestrator | 2026-01-03 02:49:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:12.497845 | orchestrator | 2026-01-03 02:49:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:12.499224 | orchestrator | 2026-01-03 02:49:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:12.499318 | orchestrator | 2026-01-03 02:49:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:15.539159 | orchestrator | 2026-01-03 02:49:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:15.540770 | orchestrator | 2026-01-03 02:49:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:15.540898 | orchestrator | 2026-01-03 02:49:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:18.586448 | orchestrator | 2026-01-03 02:49:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:18.587909 | orchestrator | 2026-01-03 02:49:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:18.587952 | orchestrator | 2026-01-03 02:49:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:21.632371 | orchestrator | 2026-01-03 02:49:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:21.634694 | orchestrator | 2026-01-03 02:49:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:21.634810 | orchestrator | 2026-01-03 02:49:21 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:49:24.683540 | orchestrator | 2026-01-03 02:49:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:24.686180 | orchestrator | 2026-01-03 02:49:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:24.686331 | orchestrator | 2026-01-03 02:49:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:27.733195 | orchestrator | 2026-01-03 02:49:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:27.734595 | orchestrator | 2026-01-03 02:49:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:27.734644 | orchestrator | 2026-01-03 02:49:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:30.772564 | orchestrator | 2026-01-03 02:49:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:30.773946 | orchestrator | 2026-01-03 02:49:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:30.774931 | orchestrator | 2026-01-03 02:49:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:33.813405 | orchestrator | 2026-01-03 02:49:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:33.815936 | orchestrator | 2026-01-03 02:49:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:33.816062 | orchestrator | 2026-01-03 02:49:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:36.855669 | orchestrator | 2026-01-03 02:49:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:36.857366 | orchestrator | 2026-01-03 02:49:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:36.857446 | orchestrator | 2026-01-03 02:49:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:39.899707 | orchestrator | 
2026-01-03 02:49:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:39.901558 | orchestrator | 2026-01-03 02:49:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:39.901645 | orchestrator | 2026-01-03 02:49:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:42.951521 | orchestrator | 2026-01-03 02:49:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:42.952844 | orchestrator | 2026-01-03 02:49:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:42.952882 | orchestrator | 2026-01-03 02:49:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:45.988956 | orchestrator | 2026-01-03 02:49:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:45.991316 | orchestrator | 2026-01-03 02:49:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:45.991419 | orchestrator | 2026-01-03 02:49:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:49.031766 | orchestrator | 2026-01-03 02:49:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:49.032092 | orchestrator | 2026-01-03 02:49:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:49.032196 | orchestrator | 2026-01-03 02:49:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:52.078555 | orchestrator | 2026-01-03 02:49:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:52.080406 | orchestrator | 2026-01-03 02:49:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:52.080571 | orchestrator | 2026-01-03 02:49:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:55.117671 | orchestrator | 2026-01-03 02:49:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:49:55.118145 | orchestrator | 2026-01-03 02:49:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:55.118176 | orchestrator | 2026-01-03 02:49:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:49:58.169571 | orchestrator | 2026-01-03 02:49:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:49:58.172640 | orchestrator | 2026-01-03 02:49:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:49:58.172740 | orchestrator | 2026-01-03 02:49:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:01.214930 | orchestrator | 2026-01-03 02:50:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:01.215321 | orchestrator | 2026-01-03 02:50:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:01.215351 | orchestrator | 2026-01-03 02:50:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:04.263042 | orchestrator | 2026-01-03 02:50:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:04.264991 | orchestrator | 2026-01-03 02:50:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:04.265078 | orchestrator | 2026-01-03 02:50:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:07.309375 | orchestrator | 2026-01-03 02:50:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:07.310812 | orchestrator | 2026-01-03 02:50:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:07.311343 | orchestrator | 2026-01-03 02:50:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:10.358181 | orchestrator | 2026-01-03 02:50:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:10.359807 | orchestrator | 2026-01-03 02:50:10 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:10.359871 | orchestrator | 2026-01-03 02:50:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:13.406088 | orchestrator | 2026-01-03 02:50:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:13.407772 | orchestrator | 2026-01-03 02:50:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:13.407867 | orchestrator | 2026-01-03 02:50:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:16.447382 | orchestrator | 2026-01-03 02:50:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:16.448671 | orchestrator | 2026-01-03 02:50:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:16.448754 | orchestrator | 2026-01-03 02:50:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:19.491529 | orchestrator | 2026-01-03 02:50:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:19.495439 | orchestrator | 2026-01-03 02:50:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:19.495517 | orchestrator | 2026-01-03 02:50:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:22.537737 | orchestrator | 2026-01-03 02:50:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:22.538740 | orchestrator | 2026-01-03 02:50:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:22.538790 | orchestrator | 2026-01-03 02:50:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:25.581915 | orchestrator | 2026-01-03 02:50:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:25.582850 | orchestrator | 2026-01-03 02:50:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:50:25.582882 | orchestrator | 2026-01-03 02:50:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:28.632170 | orchestrator | 2026-01-03 02:50:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:28.634162 | orchestrator | 2026-01-03 02:50:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:28.634212 | orchestrator | 2026-01-03 02:50:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:31.683420 | orchestrator | 2026-01-03 02:50:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:31.684676 | orchestrator | 2026-01-03 02:50:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:31.684735 | orchestrator | 2026-01-03 02:50:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:34.734783 | orchestrator | 2026-01-03 02:50:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:34.736955 | orchestrator | 2026-01-03 02:50:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:34.737012 | orchestrator | 2026-01-03 02:50:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:37.777686 | orchestrator | 2026-01-03 02:50:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:37.779086 | orchestrator | 2026-01-03 02:50:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:37.779112 | orchestrator | 2026-01-03 02:50:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:40.818980 | orchestrator | 2026-01-03 02:50:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:40.820611 | orchestrator | 2026-01-03 02:50:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:40.820689 | orchestrator | 2026-01-03 02:50:40 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:50:43.863930 | orchestrator | 2026-01-03 02:50:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:43.866725 | orchestrator | 2026-01-03 02:50:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:43.866821 | orchestrator | 2026-01-03 02:50:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:46.912259 | orchestrator | 2026-01-03 02:50:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:46.914850 | orchestrator | 2026-01-03 02:50:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:46.914924 | orchestrator | 2026-01-03 02:50:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:49.951828 | orchestrator | 2026-01-03 02:50:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:49.952841 | orchestrator | 2026-01-03 02:50:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:49.952895 | orchestrator | 2026-01-03 02:50:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:52.996022 | orchestrator | 2026-01-03 02:50:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:52.999802 | orchestrator | 2026-01-03 02:50:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:52.999955 | orchestrator | 2026-01-03 02:50:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:56.039905 | orchestrator | 2026-01-03 02:50:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:56.041514 | orchestrator | 2026-01-03 02:50:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:56.041614 | orchestrator | 2026-01-03 02:50:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:50:59.089391 | orchestrator | 
2026-01-03 02:50:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:50:59.090494 | orchestrator | 2026-01-03 02:50:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:50:59.090625 | orchestrator | 2026-01-03 02:50:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:51:02.150308 | orchestrator | 2026-01-03 02:51:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:51:02.152816 | orchestrator | 2026-01-03 02:51:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:51:02.152883 | orchestrator | 2026-01-03 02:51:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:51:05.203536 | orchestrator | 2026-01-03 02:51:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:51:05.205866 | orchestrator | 2026-01-03 02:51:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:51:05.205967 | orchestrator | 2026-01-03 02:51:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:51:08.256809 | orchestrator | 2026-01-03 02:51:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:51:08.258844 | orchestrator | 2026-01-03 02:51:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:51:08.258955 | orchestrator | 2026-01-03 02:51:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:51:11.306652 | orchestrator | 2026-01-03 02:51:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:51:11.308863 | orchestrator | 2026-01-03 02:51:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:51:11.308938 | orchestrator | 2026-01-03 02:51:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:51:14.348548 | orchestrator | 2026-01-03 02:51:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED
2026-01-03 02:51:14.349837 | orchestrator | 2026-01-03 02:51:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 02:51:14.349904 | orchestrator | 2026-01-03 02:51:14 | INFO  | Wait 1 second(s) until the next check
[... identical polling records repeated every ~3 s from 02:51:17 through 02:56:28; tasks c8921031-5513-4b9c-ad45-42b04eed7ef5 and bba6cba5-900c-422c-89b8-94737fda4049 remain in state STARTED throughout ...]
2026-01-03 02:56:31.441090 | orchestrator | 2026-01-03 02:56:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in
state STARTED 2026-01-03 02:56:31.442856 | orchestrator | 2026-01-03 02:56:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:56:31.442916 | orchestrator | 2026-01-03 02:56:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:34.493182 | orchestrator | 2026-01-03 02:56:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:56:34.495228 | orchestrator | 2026-01-03 02:56:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:56:34.495333 | orchestrator | 2026-01-03 02:56:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:37.539793 | orchestrator | 2026-01-03 02:56:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:56:37.541433 | orchestrator | 2026-01-03 02:56:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:56:37.541622 | orchestrator | 2026-01-03 02:56:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:40.580808 | orchestrator | 2026-01-03 02:56:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:56:40.582414 | orchestrator | 2026-01-03 02:56:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:56:40.582552 | orchestrator | 2026-01-03 02:56:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:43.630955 | orchestrator | 2026-01-03 02:56:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:56:43.632937 | orchestrator | 2026-01-03 02:56:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:56:43.633357 | orchestrator | 2026-01-03 02:56:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:46.676986 | orchestrator | 2026-01-03 02:56:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:56:46.678593 | orchestrator | 2026-01-03 02:56:46 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:56:46.678712 | orchestrator | 2026-01-03 02:56:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:49.735589 | orchestrator | 2026-01-03 02:56:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:56:49.737136 | orchestrator | 2026-01-03 02:56:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:56:49.737271 | orchestrator | 2026-01-03 02:56:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:52.790956 | orchestrator | 2026-01-03 02:56:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:56:52.793262 | orchestrator | 2026-01-03 02:56:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:56:52.793385 | orchestrator | 2026-01-03 02:56:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:55.840391 | orchestrator | 2026-01-03 02:56:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:56:55.841725 | orchestrator | 2026-01-03 02:56:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:56:55.842171 | orchestrator | 2026-01-03 02:56:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:56:58.891428 | orchestrator | 2026-01-03 02:56:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:56:58.893315 | orchestrator | 2026-01-03 02:56:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:56:58.893424 | orchestrator | 2026-01-03 02:56:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:01.940387 | orchestrator | 2026-01-03 02:57:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:01.942073 | orchestrator | 2026-01-03 02:57:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:57:01.942130 | orchestrator | 2026-01-03 02:57:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:04.987131 | orchestrator | 2026-01-03 02:57:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:04.988437 | orchestrator | 2026-01-03 02:57:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:04.988487 | orchestrator | 2026-01-03 02:57:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:08.034359 | orchestrator | 2026-01-03 02:57:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:08.034993 | orchestrator | 2026-01-03 02:57:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:08.035020 | orchestrator | 2026-01-03 02:57:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:11.090941 | orchestrator | 2026-01-03 02:57:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:11.093489 | orchestrator | 2026-01-03 02:57:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:11.093696 | orchestrator | 2026-01-03 02:57:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:14.152108 | orchestrator | 2026-01-03 02:57:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:14.152799 | orchestrator | 2026-01-03 02:57:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:14.152841 | orchestrator | 2026-01-03 02:57:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:17.207226 | orchestrator | 2026-01-03 02:57:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:17.211380 | orchestrator | 2026-01-03 02:57:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:17.211483 | orchestrator | 2026-01-03 02:57:17 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:57:20.260599 | orchestrator | 2026-01-03 02:57:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:20.262418 | orchestrator | 2026-01-03 02:57:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:20.262785 | orchestrator | 2026-01-03 02:57:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:23.312245 | orchestrator | 2026-01-03 02:57:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:23.313936 | orchestrator | 2026-01-03 02:57:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:23.314162 | orchestrator | 2026-01-03 02:57:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:26.359872 | orchestrator | 2026-01-03 02:57:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:26.361385 | orchestrator | 2026-01-03 02:57:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:26.361453 | orchestrator | 2026-01-03 02:57:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:29.405040 | orchestrator | 2026-01-03 02:57:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:29.406248 | orchestrator | 2026-01-03 02:57:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:29.406324 | orchestrator | 2026-01-03 02:57:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:32.453848 | orchestrator | 2026-01-03 02:57:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:32.455710 | orchestrator | 2026-01-03 02:57:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:32.455756 | orchestrator | 2026-01-03 02:57:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:35.508246 | orchestrator | 
2026-01-03 02:57:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:35.510158 | orchestrator | 2026-01-03 02:57:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:35.510359 | orchestrator | 2026-01-03 02:57:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:38.560706 | orchestrator | 2026-01-03 02:57:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:38.561679 | orchestrator | 2026-01-03 02:57:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:38.561714 | orchestrator | 2026-01-03 02:57:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:41.607335 | orchestrator | 2026-01-03 02:57:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:41.609101 | orchestrator | 2026-01-03 02:57:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:41.609219 | orchestrator | 2026-01-03 02:57:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:44.659233 | orchestrator | 2026-01-03 02:57:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:44.660976 | orchestrator | 2026-01-03 02:57:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:44.661030 | orchestrator | 2026-01-03 02:57:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:47.707222 | orchestrator | 2026-01-03 02:57:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:47.708703 | orchestrator | 2026-01-03 02:57:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:47.708749 | orchestrator | 2026-01-03 02:57:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:50.759364 | orchestrator | 2026-01-03 02:57:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:57:50.762312 | orchestrator | 2026-01-03 02:57:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:50.762485 | orchestrator | 2026-01-03 02:57:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:53.814134 | orchestrator | 2026-01-03 02:57:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:53.815158 | orchestrator | 2026-01-03 02:57:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:53.815217 | orchestrator | 2026-01-03 02:57:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:56.864733 | orchestrator | 2026-01-03 02:57:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:56.865043 | orchestrator | 2026-01-03 02:57:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:56.865086 | orchestrator | 2026-01-03 02:57:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:57:59.906855 | orchestrator | 2026-01-03 02:57:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:57:59.908102 | orchestrator | 2026-01-03 02:57:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:57:59.908147 | orchestrator | 2026-01-03 02:57:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:02.954453 | orchestrator | 2026-01-03 02:58:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:02.956470 | orchestrator | 2026-01-03 02:58:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:02.956633 | orchestrator | 2026-01-03 02:58:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:06.005970 | orchestrator | 2026-01-03 02:58:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:06.007915 | orchestrator | 2026-01-03 02:58:06 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:06.008185 | orchestrator | 2026-01-03 02:58:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:09.056898 | orchestrator | 2026-01-03 02:58:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:09.059142 | orchestrator | 2026-01-03 02:58:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:09.059211 | orchestrator | 2026-01-03 02:58:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:12.106861 | orchestrator | 2026-01-03 02:58:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:12.110829 | orchestrator | 2026-01-03 02:58:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:12.110893 | orchestrator | 2026-01-03 02:58:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:15.158125 | orchestrator | 2026-01-03 02:58:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:15.160768 | orchestrator | 2026-01-03 02:58:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:15.160852 | orchestrator | 2026-01-03 02:58:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:18.208693 | orchestrator | 2026-01-03 02:58:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:18.210893 | orchestrator | 2026-01-03 02:58:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:18.210969 | orchestrator | 2026-01-03 02:58:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:21.261704 | orchestrator | 2026-01-03 02:58:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:21.263090 | orchestrator | 2026-01-03 02:58:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:58:21.263164 | orchestrator | 2026-01-03 02:58:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:24.313622 | orchestrator | 2026-01-03 02:58:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:24.316244 | orchestrator | 2026-01-03 02:58:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:24.316303 | orchestrator | 2026-01-03 02:58:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:27.366408 | orchestrator | 2026-01-03 02:58:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:27.369590 | orchestrator | 2026-01-03 02:58:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:27.369663 | orchestrator | 2026-01-03 02:58:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:30.423474 | orchestrator | 2026-01-03 02:58:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:30.424011 | orchestrator | 2026-01-03 02:58:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:30.424107 | orchestrator | 2026-01-03 02:58:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:33.473959 | orchestrator | 2026-01-03 02:58:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:33.475590 | orchestrator | 2026-01-03 02:58:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:33.475650 | orchestrator | 2026-01-03 02:58:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:36.524026 | orchestrator | 2026-01-03 02:58:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:36.526125 | orchestrator | 2026-01-03 02:58:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:36.526230 | orchestrator | 2026-01-03 02:58:36 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:58:39.571970 | orchestrator | 2026-01-03 02:58:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:39.573664 | orchestrator | 2026-01-03 02:58:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:39.573701 | orchestrator | 2026-01-03 02:58:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:42.623468 | orchestrator | 2026-01-03 02:58:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:42.626156 | orchestrator | 2026-01-03 02:58:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:42.626241 | orchestrator | 2026-01-03 02:58:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:45.673155 | orchestrator | 2026-01-03 02:58:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:45.674248 | orchestrator | 2026-01-03 02:58:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:45.674301 | orchestrator | 2026-01-03 02:58:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:48.719201 | orchestrator | 2026-01-03 02:58:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:48.721108 | orchestrator | 2026-01-03 02:58:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:48.721155 | orchestrator | 2026-01-03 02:58:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:51.765970 | orchestrator | 2026-01-03 02:58:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:51.768025 | orchestrator | 2026-01-03 02:58:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:51.768655 | orchestrator | 2026-01-03 02:58:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:54.816584 | orchestrator | 
2026-01-03 02:58:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:54.818832 | orchestrator | 2026-01-03 02:58:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:54.818896 | orchestrator | 2026-01-03 02:58:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:58:57.864901 | orchestrator | 2026-01-03 02:58:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:58:57.866886 | orchestrator | 2026-01-03 02:58:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:58:57.866960 | orchestrator | 2026-01-03 02:58:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:00.913755 | orchestrator | 2026-01-03 02:59:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:00.915435 | orchestrator | 2026-01-03 02:59:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:00.915499 | orchestrator | 2026-01-03 02:59:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:03.963001 | orchestrator | 2026-01-03 02:59:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:03.964165 | orchestrator | 2026-01-03 02:59:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:03.964224 | orchestrator | 2026-01-03 02:59:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:07.011884 | orchestrator | 2026-01-03 02:59:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:07.014106 | orchestrator | 2026-01-03 02:59:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:07.014185 | orchestrator | 2026-01-03 02:59:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:10.058258 | orchestrator | 2026-01-03 02:59:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 02:59:10.059634 | orchestrator | 2026-01-03 02:59:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:10.059692 | orchestrator | 2026-01-03 02:59:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:13.102663 | orchestrator | 2026-01-03 02:59:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:13.104760 | orchestrator | 2026-01-03 02:59:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:13.104829 | orchestrator | 2026-01-03 02:59:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:16.156287 | orchestrator | 2026-01-03 02:59:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:16.158088 | orchestrator | 2026-01-03 02:59:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:16.158138 | orchestrator | 2026-01-03 02:59:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:19.210564 | orchestrator | 2026-01-03 02:59:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:19.211579 | orchestrator | 2026-01-03 02:59:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:19.211602 | orchestrator | 2026-01-03 02:59:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:22.262980 | orchestrator | 2026-01-03 02:59:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:22.266950 | orchestrator | 2026-01-03 02:59:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:22.267063 | orchestrator | 2026-01-03 02:59:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:25.312216 | orchestrator | 2026-01-03 02:59:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:25.312797 | orchestrator | 2026-01-03 02:59:25 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:25.312814 | orchestrator | 2026-01-03 02:59:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:28.363338 | orchestrator | 2026-01-03 02:59:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:28.367721 | orchestrator | 2026-01-03 02:59:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:28.367784 | orchestrator | 2026-01-03 02:59:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:31.419774 | orchestrator | 2026-01-03 02:59:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:31.421652 | orchestrator | 2026-01-03 02:59:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:31.421685 | orchestrator | 2026-01-03 02:59:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:34.463322 | orchestrator | 2026-01-03 02:59:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:34.463727 | orchestrator | 2026-01-03 02:59:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:34.463754 | orchestrator | 2026-01-03 02:59:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:37.524023 | orchestrator | 2026-01-03 02:59:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:37.524116 | orchestrator | 2026-01-03 02:59:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:37.524130 | orchestrator | 2026-01-03 02:59:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:40.572970 | orchestrator | 2026-01-03 02:59:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:40.576624 | orchestrator | 2026-01-03 02:59:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 02:59:40.577226 | orchestrator | 2026-01-03 02:59:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:43.634745 | orchestrator | 2026-01-03 02:59:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:43.637501 | orchestrator | 2026-01-03 02:59:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:43.637645 | orchestrator | 2026-01-03 02:59:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:46.688385 | orchestrator | 2026-01-03 02:59:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:46.690073 | orchestrator | 2026-01-03 02:59:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:46.690132 | orchestrator | 2026-01-03 02:59:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:49.745069 | orchestrator | 2026-01-03 02:59:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:49.747203 | orchestrator | 2026-01-03 02:59:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:49.747260 | orchestrator | 2026-01-03 02:59:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:52.797186 | orchestrator | 2026-01-03 02:59:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:52.797387 | orchestrator | 2026-01-03 02:59:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:52.797412 | orchestrator | 2026-01-03 02:59:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 02:59:55.848868 | orchestrator | 2026-01-03 02:59:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 02:59:55.850705 | orchestrator | 2026-01-03 02:59:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 02:59:55.850814 | orchestrator | 2026-01-03 02:59:55 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 02:59:58.904775 | orchestrator | 2026-01-03 02:59:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 02:59:58.907268 | orchestrator | 2026-01-03 02:59:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 02:59:58.907322 | orchestrator | 2026-01-03 02:59:58 | INFO  | Wait 1 second(s) until the next check
2026-01-03 03:05:13.015957 | orchestrator | 2026-01-03 03:05:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 03:05:13.018166 | orchestrator | 2026-01-03 03:05:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 03:05:13.018291 | orchestrator | 2026-01-03 03:05:13 | INFO  | Wait
1 second(s) until the next check 2026-01-03 03:05:16.063760 | orchestrator | 2026-01-03 03:05:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:16.065088 | orchestrator | 2026-01-03 03:05:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:16.065125 | orchestrator | 2026-01-03 03:05:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:19.104943 | orchestrator | 2026-01-03 03:05:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:19.105852 | orchestrator | 2026-01-03 03:05:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:19.105895 | orchestrator | 2026-01-03 03:05:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:22.153555 | orchestrator | 2026-01-03 03:05:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:22.153982 | orchestrator | 2026-01-03 03:05:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:22.154096 | orchestrator | 2026-01-03 03:05:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:25.201648 | orchestrator | 2026-01-03 03:05:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:25.204144 | orchestrator | 2026-01-03 03:05:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:25.204222 | orchestrator | 2026-01-03 03:05:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:28.254010 | orchestrator | 2026-01-03 03:05:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:28.255171 | orchestrator | 2026-01-03 03:05:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:28.255232 | orchestrator | 2026-01-03 03:05:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:31.299890 | orchestrator | 
2026-01-03 03:05:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:31.301631 | orchestrator | 2026-01-03 03:05:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:31.301693 | orchestrator | 2026-01-03 03:05:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:34.350238 | orchestrator | 2026-01-03 03:05:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:34.352155 | orchestrator | 2026-01-03 03:05:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:34.352572 | orchestrator | 2026-01-03 03:05:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:37.399374 | orchestrator | 2026-01-03 03:05:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:37.400600 | orchestrator | 2026-01-03 03:05:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:37.400670 | orchestrator | 2026-01-03 03:05:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:40.449378 | orchestrator | 2026-01-03 03:05:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:40.450959 | orchestrator | 2026-01-03 03:05:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:40.451004 | orchestrator | 2026-01-03 03:05:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:43.500630 | orchestrator | 2026-01-03 03:05:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:43.500709 | orchestrator | 2026-01-03 03:05:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:43.500716 | orchestrator | 2026-01-03 03:05:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:46.545087 | orchestrator | 2026-01-03 03:05:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:05:46.547049 | orchestrator | 2026-01-03 03:05:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:46.547086 | orchestrator | 2026-01-03 03:05:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:49.591772 | orchestrator | 2026-01-03 03:05:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:49.593331 | orchestrator | 2026-01-03 03:05:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:49.593374 | orchestrator | 2026-01-03 03:05:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:52.640771 | orchestrator | 2026-01-03 03:05:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:52.641443 | orchestrator | 2026-01-03 03:05:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:52.641486 | orchestrator | 2026-01-03 03:05:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:55.678799 | orchestrator | 2026-01-03 03:05:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:55.680341 | orchestrator | 2026-01-03 03:05:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:55.680383 | orchestrator | 2026-01-03 03:05:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:05:58.731719 | orchestrator | 2026-01-03 03:05:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:05:58.732621 | orchestrator | 2026-01-03 03:05:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:05:58.732837 | orchestrator | 2026-01-03 03:05:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:01.781820 | orchestrator | 2026-01-03 03:06:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:01.783871 | orchestrator | 2026-01-03 03:06:01 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:01.783930 | orchestrator | 2026-01-03 03:06:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:04.835815 | orchestrator | 2026-01-03 03:06:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:04.836644 | orchestrator | 2026-01-03 03:06:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:04.836686 | orchestrator | 2026-01-03 03:06:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:07.886116 | orchestrator | 2026-01-03 03:06:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:07.887192 | orchestrator | 2026-01-03 03:06:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:07.887282 | orchestrator | 2026-01-03 03:06:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:10.930254 | orchestrator | 2026-01-03 03:06:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:10.932014 | orchestrator | 2026-01-03 03:06:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:10.932059 | orchestrator | 2026-01-03 03:06:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:13.982974 | orchestrator | 2026-01-03 03:06:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:13.985360 | orchestrator | 2026-01-03 03:06:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:13.985408 | orchestrator | 2026-01-03 03:06:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:17.038482 | orchestrator | 2026-01-03 03:06:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:17.039775 | orchestrator | 2026-01-03 03:06:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:06:17.039819 | orchestrator | 2026-01-03 03:06:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:20.086701 | orchestrator | 2026-01-03 03:06:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:20.088693 | orchestrator | 2026-01-03 03:06:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:20.088779 | orchestrator | 2026-01-03 03:06:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:23.133375 | orchestrator | 2026-01-03 03:06:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:23.134349 | orchestrator | 2026-01-03 03:06:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:23.134389 | orchestrator | 2026-01-03 03:06:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:26.182367 | orchestrator | 2026-01-03 03:06:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:26.184224 | orchestrator | 2026-01-03 03:06:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:26.184314 | orchestrator | 2026-01-03 03:06:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:29.225711 | orchestrator | 2026-01-03 03:06:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:29.227315 | orchestrator | 2026-01-03 03:06:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:29.227359 | orchestrator | 2026-01-03 03:06:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:32.279445 | orchestrator | 2026-01-03 03:06:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:32.280976 | orchestrator | 2026-01-03 03:06:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:32.281026 | orchestrator | 2026-01-03 03:06:32 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:06:35.327773 | orchestrator | 2026-01-03 03:06:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:35.328414 | orchestrator | 2026-01-03 03:06:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:35.328456 | orchestrator | 2026-01-03 03:06:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:38.377838 | orchestrator | 2026-01-03 03:06:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:38.379688 | orchestrator | 2026-01-03 03:06:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:38.379747 | orchestrator | 2026-01-03 03:06:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:41.423974 | orchestrator | 2026-01-03 03:06:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:41.425711 | orchestrator | 2026-01-03 03:06:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:41.425757 | orchestrator | 2026-01-03 03:06:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:44.474599 | orchestrator | 2026-01-03 03:06:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:44.476085 | orchestrator | 2026-01-03 03:06:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:44.476177 | orchestrator | 2026-01-03 03:06:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:47.528357 | orchestrator | 2026-01-03 03:06:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:47.528472 | orchestrator | 2026-01-03 03:06:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:47.528482 | orchestrator | 2026-01-03 03:06:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:50.573245 | orchestrator | 
2026-01-03 03:06:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:50.574338 | orchestrator | 2026-01-03 03:06:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:50.574436 | orchestrator | 2026-01-03 03:06:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:53.620911 | orchestrator | 2026-01-03 03:06:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:53.621923 | orchestrator | 2026-01-03 03:06:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:53.621950 | orchestrator | 2026-01-03 03:06:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:56.668402 | orchestrator | 2026-01-03 03:06:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:56.668476 | orchestrator | 2026-01-03 03:06:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:56.668485 | orchestrator | 2026-01-03 03:06:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:06:59.713484 | orchestrator | 2026-01-03 03:06:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:06:59.715284 | orchestrator | 2026-01-03 03:06:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:06:59.715469 | orchestrator | 2026-01-03 03:06:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:02.770785 | orchestrator | 2026-01-03 03:07:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:02.771421 | orchestrator | 2026-01-03 03:07:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:02.771633 | orchestrator | 2026-01-03 03:07:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:05.822424 | orchestrator | 2026-01-03 03:07:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:07:05.824379 | orchestrator | 2026-01-03 03:07:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:05.824604 | orchestrator | 2026-01-03 03:07:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:08.876205 | orchestrator | 2026-01-03 03:07:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:08.878784 | orchestrator | 2026-01-03 03:07:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:08.879149 | orchestrator | 2026-01-03 03:07:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:11.927876 | orchestrator | 2026-01-03 03:07:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:11.932092 | orchestrator | 2026-01-03 03:07:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:11.932147 | orchestrator | 2026-01-03 03:07:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:14.990698 | orchestrator | 2026-01-03 03:07:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:14.990767 | orchestrator | 2026-01-03 03:07:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:14.990774 | orchestrator | 2026-01-03 03:07:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:18.037064 | orchestrator | 2026-01-03 03:07:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:18.039989 | orchestrator | 2026-01-03 03:07:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:18.040049 | orchestrator | 2026-01-03 03:07:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:21.080469 | orchestrator | 2026-01-03 03:07:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:21.081473 | orchestrator | 2026-01-03 03:07:21 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:21.081515 | orchestrator | 2026-01-03 03:07:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:24.124434 | orchestrator | 2026-01-03 03:07:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:24.125504 | orchestrator | 2026-01-03 03:07:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:24.125869 | orchestrator | 2026-01-03 03:07:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:27.173643 | orchestrator | 2026-01-03 03:07:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:27.175982 | orchestrator | 2026-01-03 03:07:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:27.176055 | orchestrator | 2026-01-03 03:07:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:30.224025 | orchestrator | 2026-01-03 03:07:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:30.225412 | orchestrator | 2026-01-03 03:07:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:30.225456 | orchestrator | 2026-01-03 03:07:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:33.275111 | orchestrator | 2026-01-03 03:07:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:33.276718 | orchestrator | 2026-01-03 03:07:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:33.276768 | orchestrator | 2026-01-03 03:07:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:36.331073 | orchestrator | 2026-01-03 03:07:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:36.331127 | orchestrator | 2026-01-03 03:07:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:07:36.331144 | orchestrator | 2026-01-03 03:07:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:39.376255 | orchestrator | 2026-01-03 03:07:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:39.378975 | orchestrator | 2026-01-03 03:07:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:39.379029 | orchestrator | 2026-01-03 03:07:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:42.432386 | orchestrator | 2026-01-03 03:07:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:42.434392 | orchestrator | 2026-01-03 03:07:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:42.434455 | orchestrator | 2026-01-03 03:07:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:45.488161 | orchestrator | 2026-01-03 03:07:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:45.489974 | orchestrator | 2026-01-03 03:07:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:45.490081 | orchestrator | 2026-01-03 03:07:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:48.537746 | orchestrator | 2026-01-03 03:07:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:48.539333 | orchestrator | 2026-01-03 03:07:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:48.539440 | orchestrator | 2026-01-03 03:07:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:51.580203 | orchestrator | 2026-01-03 03:07:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:51.581021 | orchestrator | 2026-01-03 03:07:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:51.581198 | orchestrator | 2026-01-03 03:07:51 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:07:54.630214 | orchestrator | 2026-01-03 03:07:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:54.631926 | orchestrator | 2026-01-03 03:07:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:54.632679 | orchestrator | 2026-01-03 03:07:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:07:57.675773 | orchestrator | 2026-01-03 03:07:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:07:57.678745 | orchestrator | 2026-01-03 03:07:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:07:57.678808 | orchestrator | 2026-01-03 03:07:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:00.722150 | orchestrator | 2026-01-03 03:08:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:00.724208 | orchestrator | 2026-01-03 03:08:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:00.724315 | orchestrator | 2026-01-03 03:08:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:03.768186 | orchestrator | 2026-01-03 03:08:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:03.769813 | orchestrator | 2026-01-03 03:08:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:03.769851 | orchestrator | 2026-01-03 03:08:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:06.825447 | orchestrator | 2026-01-03 03:08:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:06.826301 | orchestrator | 2026-01-03 03:08:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:06.826714 | orchestrator | 2026-01-03 03:08:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:09.878883 | orchestrator | 
2026-01-03 03:08:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:09.880518 | orchestrator | 2026-01-03 03:08:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:09.880614 | orchestrator | 2026-01-03 03:08:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:12.928841 | orchestrator | 2026-01-03 03:08:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:12.930172 | orchestrator | 2026-01-03 03:08:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:12.930204 | orchestrator | 2026-01-03 03:08:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:15.981294 | orchestrator | 2026-01-03 03:08:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:15.982143 | orchestrator | 2026-01-03 03:08:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:15.982198 | orchestrator | 2026-01-03 03:08:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:19.029767 | orchestrator | 2026-01-03 03:08:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:19.031222 | orchestrator | 2026-01-03 03:08:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:19.031306 | orchestrator | 2026-01-03 03:08:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:22.083409 | orchestrator | 2026-01-03 03:08:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:22.085188 | orchestrator | 2026-01-03 03:08:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:22.085307 | orchestrator | 2026-01-03 03:08:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:25.125944 | orchestrator | 2026-01-03 03:08:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:08:25.126994 | orchestrator | 2026-01-03 03:08:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:25.127033 | orchestrator | 2026-01-03 03:08:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:28.175706 | orchestrator | 2026-01-03 03:08:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:28.176956 | orchestrator | 2026-01-03 03:08:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:28.177589 | orchestrator | 2026-01-03 03:08:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:31.222000 | orchestrator | 2026-01-03 03:08:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:31.223875 | orchestrator | 2026-01-03 03:08:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:31.223931 | orchestrator | 2026-01-03 03:08:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:34.275394 | orchestrator | 2026-01-03 03:08:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:34.278259 | orchestrator | 2026-01-03 03:08:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:34.278307 | orchestrator | 2026-01-03 03:08:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:37.330083 | orchestrator | 2026-01-03 03:08:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:37.332514 | orchestrator | 2026-01-03 03:08:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:37.332683 | orchestrator | 2026-01-03 03:08:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:40.377863 | orchestrator | 2026-01-03 03:08:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:40.381444 | orchestrator | 2026-01-03 03:08:40 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:40.382116 | orchestrator | 2026-01-03 03:08:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:43.428313 | orchestrator | 2026-01-03 03:08:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:43.430145 | orchestrator | 2026-01-03 03:08:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:43.430242 | orchestrator | 2026-01-03 03:08:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:46.491508 | orchestrator | 2026-01-03 03:08:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:46.492517 | orchestrator | 2026-01-03 03:08:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:46.492585 | orchestrator | 2026-01-03 03:08:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:49.539575 | orchestrator | 2026-01-03 03:08:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:49.541530 | orchestrator | 2026-01-03 03:08:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:49.541668 | orchestrator | 2026-01-03 03:08:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:52.586414 | orchestrator | 2026-01-03 03:08:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:52.586506 | orchestrator | 2026-01-03 03:08:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:08:52.586518 | orchestrator | 2026-01-03 03:08:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:08:55.621449 | orchestrator | 2026-01-03 03:08:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:08:55.623854 | orchestrator | 2026-01-03 03:08:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:08:55.623940 | orchestrator | 2026-01-03 03:08:55 | INFO  | Wait 1 second(s) until the next check
2026-01-03 03:08:58.678998 | orchestrator | 2026-01-03 03:08:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 03:08:58.680088 | orchestrator | 2026-01-03 03:08:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 03:08:58.680186 | orchestrator | 2026-01-03 03:08:58 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 03:09:01 through 03:14:24; both tasks remained in state STARTED throughout ...]
2026-01-03 03:14:28.027022 | orchestrator | 2026-01-03 03:14:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 03:14:28.029352 | orchestrator | 2026-01-03 03:14:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 03:14:28.029461 | orchestrator | 2026-01-03 03:14:28 | INFO  | Wait
1 second(s) until the next check 2026-01-03 03:14:31.074687 | orchestrator | 2026-01-03 03:14:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:14:31.077308 | orchestrator | 2026-01-03 03:14:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:14:31.077430 | orchestrator | 2026-01-03 03:14:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:34.122458 | orchestrator | 2026-01-03 03:14:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:14:34.125198 | orchestrator | 2026-01-03 03:14:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:14:34.125261 | orchestrator | 2026-01-03 03:14:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:37.171189 | orchestrator | 2026-01-03 03:14:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:14:37.172062 | orchestrator | 2026-01-03 03:14:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:14:37.172105 | orchestrator | 2026-01-03 03:14:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:40.218407 | orchestrator | 2026-01-03 03:14:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:14:40.219784 | orchestrator | 2026-01-03 03:14:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:14:40.219838 | orchestrator | 2026-01-03 03:14:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:43.267483 | orchestrator | 2026-01-03 03:14:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:14:43.269734 | orchestrator | 2026-01-03 03:14:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:14:43.269838 | orchestrator | 2026-01-03 03:14:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:46.321750 | orchestrator | 
2026-01-03 03:14:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:14:46.323740 | orchestrator | 2026-01-03 03:14:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:14:46.323843 | orchestrator | 2026-01-03 03:14:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:49.374192 | orchestrator | 2026-01-03 03:14:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:14:49.374944 | orchestrator | 2026-01-03 03:14:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:14:49.374980 | orchestrator | 2026-01-03 03:14:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:52.430322 | orchestrator | 2026-01-03 03:14:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:14:52.432288 | orchestrator | 2026-01-03 03:14:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:14:52.432379 | orchestrator | 2026-01-03 03:14:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:55.477508 | orchestrator | 2026-01-03 03:14:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:14:55.478593 | orchestrator | 2026-01-03 03:14:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:14:55.478924 | orchestrator | 2026-01-03 03:14:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:14:58.521533 | orchestrator | 2026-01-03 03:14:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:14:58.522946 | orchestrator | 2026-01-03 03:14:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:14:58.522985 | orchestrator | 2026-01-03 03:14:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:01.566377 | orchestrator | 2026-01-03 03:15:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:15:01.568326 | orchestrator | 2026-01-03 03:15:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:01.568420 | orchestrator | 2026-01-03 03:15:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:04.620981 | orchestrator | 2026-01-03 03:15:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:04.623889 | orchestrator | 2026-01-03 03:15:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:04.623954 | orchestrator | 2026-01-03 03:15:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:07.667587 | orchestrator | 2026-01-03 03:15:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:07.670697 | orchestrator | 2026-01-03 03:15:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:07.670783 | orchestrator | 2026-01-03 03:15:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:10.718653 | orchestrator | 2026-01-03 03:15:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:10.720325 | orchestrator | 2026-01-03 03:15:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:10.720462 | orchestrator | 2026-01-03 03:15:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:13.774956 | orchestrator | 2026-01-03 03:15:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:13.775075 | orchestrator | 2026-01-03 03:15:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:13.775220 | orchestrator | 2026-01-03 03:15:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:16.813555 | orchestrator | 2026-01-03 03:15:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:16.815068 | orchestrator | 2026-01-03 03:15:16 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:16.815122 | orchestrator | 2026-01-03 03:15:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:19.871180 | orchestrator | 2026-01-03 03:15:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:19.871829 | orchestrator | 2026-01-03 03:15:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:19.871868 | orchestrator | 2026-01-03 03:15:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:22.917249 | orchestrator | 2026-01-03 03:15:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:22.919509 | orchestrator | 2026-01-03 03:15:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:22.919590 | orchestrator | 2026-01-03 03:15:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:25.971401 | orchestrator | 2026-01-03 03:15:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:25.973463 | orchestrator | 2026-01-03 03:15:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:25.973556 | orchestrator | 2026-01-03 03:15:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:29.022811 | orchestrator | 2026-01-03 03:15:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:29.022905 | orchestrator | 2026-01-03 03:15:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:29.022968 | orchestrator | 2026-01-03 03:15:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:32.071854 | orchestrator | 2026-01-03 03:15:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:32.073616 | orchestrator | 2026-01-03 03:15:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:15:32.073702 | orchestrator | 2026-01-03 03:15:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:35.122988 | orchestrator | 2026-01-03 03:15:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:35.124203 | orchestrator | 2026-01-03 03:15:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:35.124345 | orchestrator | 2026-01-03 03:15:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:38.169789 | orchestrator | 2026-01-03 03:15:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:38.171621 | orchestrator | 2026-01-03 03:15:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:38.171679 | orchestrator | 2026-01-03 03:15:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:41.223038 | orchestrator | 2026-01-03 03:15:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:41.227089 | orchestrator | 2026-01-03 03:15:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:41.227221 | orchestrator | 2026-01-03 03:15:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:44.273076 | orchestrator | 2026-01-03 03:15:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:44.273245 | orchestrator | 2026-01-03 03:15:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:44.273265 | orchestrator | 2026-01-03 03:15:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:47.325914 | orchestrator | 2026-01-03 03:15:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:47.327963 | orchestrator | 2026-01-03 03:15:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:47.328040 | orchestrator | 2026-01-03 03:15:47 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:15:50.378102 | orchestrator | 2026-01-03 03:15:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:50.380244 | orchestrator | 2026-01-03 03:15:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:50.380377 | orchestrator | 2026-01-03 03:15:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:53.427029 | orchestrator | 2026-01-03 03:15:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:53.430128 | orchestrator | 2026-01-03 03:15:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:53.430221 | orchestrator | 2026-01-03 03:15:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:56.479055 | orchestrator | 2026-01-03 03:15:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:56.481145 | orchestrator | 2026-01-03 03:15:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:56.481347 | orchestrator | 2026-01-03 03:15:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:15:59.531972 | orchestrator | 2026-01-03 03:15:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:15:59.533926 | orchestrator | 2026-01-03 03:15:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:15:59.534602 | orchestrator | 2026-01-03 03:15:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:02.581700 | orchestrator | 2026-01-03 03:16:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:02.581803 | orchestrator | 2026-01-03 03:16:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:02.581818 | orchestrator | 2026-01-03 03:16:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:05.626867 | orchestrator | 
2026-01-03 03:16:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:05.628669 | orchestrator | 2026-01-03 03:16:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:05.628758 | orchestrator | 2026-01-03 03:16:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:08.674151 | orchestrator | 2026-01-03 03:16:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:08.675134 | orchestrator | 2026-01-03 03:16:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:08.675288 | orchestrator | 2026-01-03 03:16:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:11.717670 | orchestrator | 2026-01-03 03:16:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:11.719755 | orchestrator | 2026-01-03 03:16:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:11.719844 | orchestrator | 2026-01-03 03:16:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:14.759854 | orchestrator | 2026-01-03 03:16:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:14.761590 | orchestrator | 2026-01-03 03:16:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:14.761669 | orchestrator | 2026-01-03 03:16:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:17.813937 | orchestrator | 2026-01-03 03:16:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:17.815331 | orchestrator | 2026-01-03 03:16:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:17.815414 | orchestrator | 2026-01-03 03:16:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:20.865374 | orchestrator | 2026-01-03 03:16:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:16:20.867753 | orchestrator | 2026-01-03 03:16:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:20.867828 | orchestrator | 2026-01-03 03:16:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:23.923712 | orchestrator | 2026-01-03 03:16:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:23.924559 | orchestrator | 2026-01-03 03:16:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:23.924858 | orchestrator | 2026-01-03 03:16:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:26.980730 | orchestrator | 2026-01-03 03:16:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:26.982563 | orchestrator | 2026-01-03 03:16:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:26.982668 | orchestrator | 2026-01-03 03:16:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:30.042638 | orchestrator | 2026-01-03 03:16:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:30.043711 | orchestrator | 2026-01-03 03:16:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:30.043761 | orchestrator | 2026-01-03 03:16:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:33.093695 | orchestrator | 2026-01-03 03:16:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:33.094865 | orchestrator | 2026-01-03 03:16:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:33.094948 | orchestrator | 2026-01-03 03:16:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:36.140511 | orchestrator | 2026-01-03 03:16:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:36.143640 | orchestrator | 2026-01-03 03:16:36 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:36.143747 | orchestrator | 2026-01-03 03:16:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:39.187439 | orchestrator | 2026-01-03 03:16:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:39.188848 | orchestrator | 2026-01-03 03:16:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:39.188929 | orchestrator | 2026-01-03 03:16:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:42.236664 | orchestrator | 2026-01-03 03:16:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:42.237520 | orchestrator | 2026-01-03 03:16:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:42.237552 | orchestrator | 2026-01-03 03:16:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:45.279171 | orchestrator | 2026-01-03 03:16:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:45.279355 | orchestrator | 2026-01-03 03:16:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:45.279585 | orchestrator | 2026-01-03 03:16:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:48.327414 | orchestrator | 2026-01-03 03:16:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:48.329121 | orchestrator | 2026-01-03 03:16:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:48.329831 | orchestrator | 2026-01-03 03:16:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:51.373613 | orchestrator | 2026-01-03 03:16:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:51.374985 | orchestrator | 2026-01-03 03:16:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:16:51.375045 | orchestrator | 2026-01-03 03:16:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:54.414634 | orchestrator | 2026-01-03 03:16:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:54.414963 | orchestrator | 2026-01-03 03:16:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:54.415019 | orchestrator | 2026-01-03 03:16:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:16:57.462647 | orchestrator | 2026-01-03 03:16:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:16:57.464500 | orchestrator | 2026-01-03 03:16:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:16:57.464578 | orchestrator | 2026-01-03 03:16:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:00.514495 | orchestrator | 2026-01-03 03:17:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:00.515673 | orchestrator | 2026-01-03 03:17:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:00.515716 | orchestrator | 2026-01-03 03:17:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:03.565073 | orchestrator | 2026-01-03 03:17:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:03.567411 | orchestrator | 2026-01-03 03:17:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:03.567480 | orchestrator | 2026-01-03 03:17:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:06.612279 | orchestrator | 2026-01-03 03:17:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:06.613198 | orchestrator | 2026-01-03 03:17:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:06.613660 | orchestrator | 2026-01-03 03:17:06 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:17:09.659990 | orchestrator | 2026-01-03 03:17:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:09.662220 | orchestrator | 2026-01-03 03:17:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:09.662348 | orchestrator | 2026-01-03 03:17:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:12.710290 | orchestrator | 2026-01-03 03:17:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:12.712216 | orchestrator | 2026-01-03 03:17:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:12.712266 | orchestrator | 2026-01-03 03:17:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:15.753684 | orchestrator | 2026-01-03 03:17:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:15.754428 | orchestrator | 2026-01-03 03:17:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:15.754476 | orchestrator | 2026-01-03 03:17:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:18.800419 | orchestrator | 2026-01-03 03:17:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:18.801120 | orchestrator | 2026-01-03 03:17:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:18.801164 | orchestrator | 2026-01-03 03:17:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:21.847171 | orchestrator | 2026-01-03 03:17:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:21.848071 | orchestrator | 2026-01-03 03:17:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:21.848119 | orchestrator | 2026-01-03 03:17:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:24.896911 | orchestrator | 
2026-01-03 03:17:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:24.897882 | orchestrator | 2026-01-03 03:17:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:24.897986 | orchestrator | 2026-01-03 03:17:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:27.957970 | orchestrator | 2026-01-03 03:17:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:27.959620 | orchestrator | 2026-01-03 03:17:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:27.959664 | orchestrator | 2026-01-03 03:17:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:31.008397 | orchestrator | 2026-01-03 03:17:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:31.010440 | orchestrator | 2026-01-03 03:17:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:31.010535 | orchestrator | 2026-01-03 03:17:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:34.059353 | orchestrator | 2026-01-03 03:17:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:34.060925 | orchestrator | 2026-01-03 03:17:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:34.060974 | orchestrator | 2026-01-03 03:17:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:37.111854 | orchestrator | 2026-01-03 03:17:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:37.113896 | orchestrator | 2026-01-03 03:17:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:37.114107 | orchestrator | 2026-01-03 03:17:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:40.149861 | orchestrator | 2026-01-03 03:17:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:17:40.150657 | orchestrator | 2026-01-03 03:17:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:40.150714 | orchestrator | 2026-01-03 03:17:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:43.195909 | orchestrator | 2026-01-03 03:17:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:43.196789 | orchestrator | 2026-01-03 03:17:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:43.196825 | orchestrator | 2026-01-03 03:17:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:46.242627 | orchestrator | 2026-01-03 03:17:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:46.243422 | orchestrator | 2026-01-03 03:17:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:46.243571 | orchestrator | 2026-01-03 03:17:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:49.293180 | orchestrator | 2026-01-03 03:17:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:49.296825 | orchestrator | 2026-01-03 03:17:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:49.296890 | orchestrator | 2026-01-03 03:17:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:52.344155 | orchestrator | 2026-01-03 03:17:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:52.345303 | orchestrator | 2026-01-03 03:17:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:52.345366 | orchestrator | 2026-01-03 03:17:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:55.389641 | orchestrator | 2026-01-03 03:17:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:55.391074 | orchestrator | 2026-01-03 03:17:55 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:55.391151 | orchestrator | 2026-01-03 03:17:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:17:58.445727 | orchestrator | 2026-01-03 03:17:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:17:58.445825 | orchestrator | 2026-01-03 03:17:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:17:58.445840 | orchestrator | 2026-01-03 03:17:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:18:01.490305 | orchestrator | 2026-01-03 03:18:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:18:01.494605 | orchestrator | 2026-01-03 03:18:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:18:01.494694 | orchestrator | 2026-01-03 03:18:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:18:04.532114 | orchestrator | 2026-01-03 03:18:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:18:04.534622 | orchestrator | 2026-01-03 03:18:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:18:04.534671 | orchestrator | 2026-01-03 03:18:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:18:07.576562 | orchestrator | 2026-01-03 03:18:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:18:07.578168 | orchestrator | 2026-01-03 03:18:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:18:07.578245 | orchestrator | 2026-01-03 03:18:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:18:10.623289 | orchestrator | 2026-01-03 03:18:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:18:10.626889 | orchestrator | 2026-01-03 03:18:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:18:10.626993 | orchestrator | 2026-01-03 03:18:10 | INFO  | Wait 1 second(s) until the next check
2026-01-03 03:18:13.673676 | orchestrator | 2026-01-03 03:18:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 03:18:13.674740 | orchestrator | 2026-01-03 03:18:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 03:18:13.674812 | orchestrator | 2026-01-03 03:18:13 | INFO  | Wait 1 second(s) until the next check
[... the same three-entry polling cycle (both tasks in state STARTED, then "Wait 1 second(s) until the next check") repeats every ~3 seconds from 03:18:16 through 03:23:09 ...]
2026-01-03 03:23:12.566486 | orchestrator | 2026-01-03 03:23:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 03:23:12.567655 | orchestrator | 2026-01-03 03:23:12 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:12.567751 | orchestrator | 2026-01-03 03:23:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:15.613815 | orchestrator | 2026-01-03 03:23:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:15.614626 | orchestrator | 2026-01-03 03:23:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:15.614673 | orchestrator | 2026-01-03 03:23:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:18.666595 | orchestrator | 2026-01-03 03:23:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:18.667767 | orchestrator | 2026-01-03 03:23:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:18.667805 | orchestrator | 2026-01-03 03:23:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:21.715664 | orchestrator | 2026-01-03 03:23:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:21.716688 | orchestrator | 2026-01-03 03:23:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:21.716754 | orchestrator | 2026-01-03 03:23:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:24.763261 | orchestrator | 2026-01-03 03:23:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:24.764818 | orchestrator | 2026-01-03 03:23:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:24.764852 | orchestrator | 2026-01-03 03:23:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:27.823165 | orchestrator | 2026-01-03 03:23:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:27.823263 | orchestrator | 2026-01-03 03:23:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:23:27.823271 | orchestrator | 2026-01-03 03:23:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:30.857614 | orchestrator | 2026-01-03 03:23:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:30.859518 | orchestrator | 2026-01-03 03:23:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:30.859641 | orchestrator | 2026-01-03 03:23:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:33.903422 | orchestrator | 2026-01-03 03:23:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:33.904425 | orchestrator | 2026-01-03 03:23:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:33.904514 | orchestrator | 2026-01-03 03:23:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:36.948629 | orchestrator | 2026-01-03 03:23:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:36.949535 | orchestrator | 2026-01-03 03:23:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:36.949569 | orchestrator | 2026-01-03 03:23:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:39.999413 | orchestrator | 2026-01-03 03:23:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:40.000736 | orchestrator | 2026-01-03 03:23:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:40.000791 | orchestrator | 2026-01-03 03:23:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:43.055181 | orchestrator | 2026-01-03 03:23:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:43.055847 | orchestrator | 2026-01-03 03:23:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:43.055906 | orchestrator | 2026-01-03 03:23:43 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:23:46.106972 | orchestrator | 2026-01-03 03:23:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:46.108115 | orchestrator | 2026-01-03 03:23:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:46.108219 | orchestrator | 2026-01-03 03:23:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:49.156802 | orchestrator | 2026-01-03 03:23:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:49.157481 | orchestrator | 2026-01-03 03:23:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:49.157502 | orchestrator | 2026-01-03 03:23:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:52.203486 | orchestrator | 2026-01-03 03:23:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:52.205123 | orchestrator | 2026-01-03 03:23:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:52.205211 | orchestrator | 2026-01-03 03:23:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:55.260005 | orchestrator | 2026-01-03 03:23:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:55.260116 | orchestrator | 2026-01-03 03:23:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:55.260123 | orchestrator | 2026-01-03 03:23:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:23:58.300128 | orchestrator | 2026-01-03 03:23:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:23:58.300475 | orchestrator | 2026-01-03 03:23:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:23:58.300503 | orchestrator | 2026-01-03 03:23:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:01.349219 | orchestrator | 
2026-01-03 03:24:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:01.350830 | orchestrator | 2026-01-03 03:24:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:01.350915 | orchestrator | 2026-01-03 03:24:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:04.397567 | orchestrator | 2026-01-03 03:24:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:04.397727 | orchestrator | 2026-01-03 03:24:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:04.398153 | orchestrator | 2026-01-03 03:24:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:07.437930 | orchestrator | 2026-01-03 03:24:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:07.439854 | orchestrator | 2026-01-03 03:24:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:07.439969 | orchestrator | 2026-01-03 03:24:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:10.481241 | orchestrator | 2026-01-03 03:24:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:10.481337 | orchestrator | 2026-01-03 03:24:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:10.481344 | orchestrator | 2026-01-03 03:24:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:13.525539 | orchestrator | 2026-01-03 03:24:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:13.525742 | orchestrator | 2026-01-03 03:24:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:13.525765 | orchestrator | 2026-01-03 03:24:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:16.574140 | orchestrator | 2026-01-03 03:24:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:24:16.574306 | orchestrator | 2026-01-03 03:24:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:16.574320 | orchestrator | 2026-01-03 03:24:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:19.619705 | orchestrator | 2026-01-03 03:24:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:19.621756 | orchestrator | 2026-01-03 03:24:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:19.621864 | orchestrator | 2026-01-03 03:24:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:22.665170 | orchestrator | 2026-01-03 03:24:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:22.667177 | orchestrator | 2026-01-03 03:24:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:22.667254 | orchestrator | 2026-01-03 03:24:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:25.711867 | orchestrator | 2026-01-03 03:24:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:25.714254 | orchestrator | 2026-01-03 03:24:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:25.714329 | orchestrator | 2026-01-03 03:24:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:28.762411 | orchestrator | 2026-01-03 03:24:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:28.765011 | orchestrator | 2026-01-03 03:24:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:28.765136 | orchestrator | 2026-01-03 03:24:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:31.812151 | orchestrator | 2026-01-03 03:24:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:31.814655 | orchestrator | 2026-01-03 03:24:31 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:31.814792 | orchestrator | 2026-01-03 03:24:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:34.864454 | orchestrator | 2026-01-03 03:24:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:34.867288 | orchestrator | 2026-01-03 03:24:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:34.867363 | orchestrator | 2026-01-03 03:24:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:37.918822 | orchestrator | 2026-01-03 03:24:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:37.922266 | orchestrator | 2026-01-03 03:24:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:37.922341 | orchestrator | 2026-01-03 03:24:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:40.974159 | orchestrator | 2026-01-03 03:24:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:40.975993 | orchestrator | 2026-01-03 03:24:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:40.976048 | orchestrator | 2026-01-03 03:24:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:44.022203 | orchestrator | 2026-01-03 03:24:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:44.024009 | orchestrator | 2026-01-03 03:24:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:44.024159 | orchestrator | 2026-01-03 03:24:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:47.073178 | orchestrator | 2026-01-03 03:24:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:47.074858 | orchestrator | 2026-01-03 03:24:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:24:47.074943 | orchestrator | 2026-01-03 03:24:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:50.118554 | orchestrator | 2026-01-03 03:24:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:50.119462 | orchestrator | 2026-01-03 03:24:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:50.119516 | orchestrator | 2026-01-03 03:24:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:53.168477 | orchestrator | 2026-01-03 03:24:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:53.168579 | orchestrator | 2026-01-03 03:24:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:53.168589 | orchestrator | 2026-01-03 03:24:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:56.220616 | orchestrator | 2026-01-03 03:24:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:56.222808 | orchestrator | 2026-01-03 03:24:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:56.222900 | orchestrator | 2026-01-03 03:24:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:24:59.269705 | orchestrator | 2026-01-03 03:24:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:24:59.271564 | orchestrator | 2026-01-03 03:24:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:24:59.271628 | orchestrator | 2026-01-03 03:24:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:02.319586 | orchestrator | 2026-01-03 03:25:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:02.322238 | orchestrator | 2026-01-03 03:25:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:02.322303 | orchestrator | 2026-01-03 03:25:02 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:25:05.366904 | orchestrator | 2026-01-03 03:25:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:05.368047 | orchestrator | 2026-01-03 03:25:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:05.368160 | orchestrator | 2026-01-03 03:25:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:08.418862 | orchestrator | 2026-01-03 03:25:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:08.422085 | orchestrator | 2026-01-03 03:25:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:08.422255 | orchestrator | 2026-01-03 03:25:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:11.472691 | orchestrator | 2026-01-03 03:25:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:11.474830 | orchestrator | 2026-01-03 03:25:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:11.474940 | orchestrator | 2026-01-03 03:25:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:14.534657 | orchestrator | 2026-01-03 03:25:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:14.536289 | orchestrator | 2026-01-03 03:25:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:14.536347 | orchestrator | 2026-01-03 03:25:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:17.588690 | orchestrator | 2026-01-03 03:25:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:17.589465 | orchestrator | 2026-01-03 03:25:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:17.589553 | orchestrator | 2026-01-03 03:25:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:20.638090 | orchestrator | 
2026-01-03 03:25:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:20.640462 | orchestrator | 2026-01-03 03:25:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:20.640788 | orchestrator | 2026-01-03 03:25:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:23.686700 | orchestrator | 2026-01-03 03:25:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:23.690432 | orchestrator | 2026-01-03 03:25:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:23.690519 | orchestrator | 2026-01-03 03:25:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:26.738289 | orchestrator | 2026-01-03 03:25:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:26.741015 | orchestrator | 2026-01-03 03:25:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:26.741086 | orchestrator | 2026-01-03 03:25:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:29.787945 | orchestrator | 2026-01-03 03:25:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:29.790352 | orchestrator | 2026-01-03 03:25:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:29.790404 | orchestrator | 2026-01-03 03:25:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:32.836580 | orchestrator | 2026-01-03 03:25:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:32.839495 | orchestrator | 2026-01-03 03:25:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:32.839696 | orchestrator | 2026-01-03 03:25:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:35.885408 | orchestrator | 2026-01-03 03:25:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:25:35.886617 | orchestrator | 2026-01-03 03:25:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:35.886732 | orchestrator | 2026-01-03 03:25:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:38.931323 | orchestrator | 2026-01-03 03:25:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:38.934868 | orchestrator | 2026-01-03 03:25:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:38.934955 | orchestrator | 2026-01-03 03:25:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:41.987647 | orchestrator | 2026-01-03 03:25:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:41.989260 | orchestrator | 2026-01-03 03:25:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:41.989450 | orchestrator | 2026-01-03 03:25:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:45.044891 | orchestrator | 2026-01-03 03:25:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:45.045030 | orchestrator | 2026-01-03 03:25:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:45.045055 | orchestrator | 2026-01-03 03:25:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:48.093930 | orchestrator | 2026-01-03 03:25:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:48.095280 | orchestrator | 2026-01-03 03:25:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:48.095362 | orchestrator | 2026-01-03 03:25:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:51.144299 | orchestrator | 2026-01-03 03:25:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:51.145437 | orchestrator | 2026-01-03 03:25:51 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:51.145473 | orchestrator | 2026-01-03 03:25:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:54.189325 | orchestrator | 2026-01-03 03:25:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:54.192127 | orchestrator | 2026-01-03 03:25:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:54.192222 | orchestrator | 2026-01-03 03:25:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:25:57.250302 | orchestrator | 2026-01-03 03:25:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:25:57.250694 | orchestrator | 2026-01-03 03:25:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:25:57.250727 | orchestrator | 2026-01-03 03:25:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:00.295962 | orchestrator | 2026-01-03 03:26:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:00.296254 | orchestrator | 2026-01-03 03:26:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:00.296510 | orchestrator | 2026-01-03 03:26:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:03.344758 | orchestrator | 2026-01-03 03:26:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:03.346273 | orchestrator | 2026-01-03 03:26:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:03.346318 | orchestrator | 2026-01-03 03:26:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:06.390841 | orchestrator | 2026-01-03 03:26:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:06.392420 | orchestrator | 2026-01-03 03:26:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:26:06.392449 | orchestrator | 2026-01-03 03:26:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:09.430644 | orchestrator | 2026-01-03 03:26:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:09.432608 | orchestrator | 2026-01-03 03:26:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:09.432672 | orchestrator | 2026-01-03 03:26:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:12.470639 | orchestrator | 2026-01-03 03:26:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:12.472967 | orchestrator | 2026-01-03 03:26:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:12.473233 | orchestrator | 2026-01-03 03:26:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:15.525786 | orchestrator | 2026-01-03 03:26:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:15.527595 | orchestrator | 2026-01-03 03:26:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:15.527748 | orchestrator | 2026-01-03 03:26:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:18.577231 | orchestrator | 2026-01-03 03:26:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:18.578818 | orchestrator | 2026-01-03 03:26:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:18.578871 | orchestrator | 2026-01-03 03:26:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:21.627474 | orchestrator | 2026-01-03 03:26:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:21.629155 | orchestrator | 2026-01-03 03:26:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:21.629328 | orchestrator | 2026-01-03 03:26:21 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:26:24.674951 | orchestrator | 2026-01-03 03:26:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:24.676873 | orchestrator | 2026-01-03 03:26:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:24.676981 | orchestrator | 2026-01-03 03:26:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:27.723397 | orchestrator | 2026-01-03 03:26:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:27.725165 | orchestrator | 2026-01-03 03:26:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:27.725223 | orchestrator | 2026-01-03 03:26:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:30.775328 | orchestrator | 2026-01-03 03:26:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:30.777217 | orchestrator | 2026-01-03 03:26:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:30.777306 | orchestrator | 2026-01-03 03:26:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:33.823261 | orchestrator | 2026-01-03 03:26:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:34.089823 | orchestrator | 2026-01-03 03:26:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:34.089905 | orchestrator | 2026-01-03 03:26:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:36.872104 | orchestrator | 2026-01-03 03:26:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:36.874429 | orchestrator | 2026-01-03 03:26:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:36.874515 | orchestrator | 2026-01-03 03:26:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:39.924909 | orchestrator | 
2026-01-03 03:26:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:39.926650 | orchestrator | 2026-01-03 03:26:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:39.926681 | orchestrator | 2026-01-03 03:26:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:42.974989 | orchestrator | 2026-01-03 03:26:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:42.976389 | orchestrator | 2026-01-03 03:26:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:42.976444 | orchestrator | 2026-01-03 03:26:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:46.030006 | orchestrator | 2026-01-03 03:26:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:46.030249 | orchestrator | 2026-01-03 03:26:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:46.030280 | orchestrator | 2026-01-03 03:26:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:49.071014 | orchestrator | 2026-01-03 03:26:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:49.076018 | orchestrator | 2026-01-03 03:26:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:49.076131 | orchestrator | 2026-01-03 03:26:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:52.120276 | orchestrator | 2026-01-03 03:26:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:26:52.123586 | orchestrator | 2026-01-03 03:26:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:52.123663 | orchestrator | 2026-01-03 03:26:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:26:55.165069 | orchestrator | 2026-01-03 03:26:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:26:55.167729 | orchestrator | 2026-01-03 03:26:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:26:55.167897 | orchestrator | 2026-01-03 03:26:55 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 s from 03:26:58 to 03:32:27; Task c8921031-5513-4b9c-ad45-42b04eed7ef5 and Task bba6cba5-900c-422c-89b8-94737fda4049 remained in state STARTED throughout ...]
2026-01-03 03:32:27.479591 | orchestrator | 2026-01-03 03:32:27 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:32:27.479646 | orchestrator | 2026-01-03 03:32:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:30.528132 | orchestrator | 2026-01-03 03:32:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:32:30.529984 | orchestrator | 2026-01-03 03:32:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:32:30.530091 | orchestrator | 2026-01-03 03:32:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:33.572995 | orchestrator | 2026-01-03 03:32:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:32:33.575008 | orchestrator | 2026-01-03 03:32:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:32:33.575071 | orchestrator | 2026-01-03 03:32:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:36.621950 | orchestrator | 2026-01-03 03:32:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:32:36.622907 | orchestrator | 2026-01-03 03:32:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:32:36.622940 | orchestrator | 2026-01-03 03:32:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:39.667365 | orchestrator | 2026-01-03 03:32:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:32:39.668081 | orchestrator | 2026-01-03 03:32:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:32:39.668151 | orchestrator | 2026-01-03 03:32:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:42.713840 | orchestrator | 2026-01-03 03:32:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:32:42.715716 | orchestrator | 2026-01-03 03:32:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:32:42.715768 | orchestrator | 2026-01-03 03:32:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:45.764451 | orchestrator | 2026-01-03 03:32:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:32:45.766575 | orchestrator | 2026-01-03 03:32:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:32:45.766631 | orchestrator | 2026-01-03 03:32:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:48.809168 | orchestrator | 2026-01-03 03:32:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:32:48.810575 | orchestrator | 2026-01-03 03:32:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:32:48.810806 | orchestrator | 2026-01-03 03:32:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:51.856562 | orchestrator | 2026-01-03 03:32:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:32:51.857633 | orchestrator | 2026-01-03 03:32:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:32:51.857669 | orchestrator | 2026-01-03 03:32:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:54.901291 | orchestrator | 2026-01-03 03:32:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:32:54.903720 | orchestrator | 2026-01-03 03:32:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:32:54.903787 | orchestrator | 2026-01-03 03:32:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:32:57.951336 | orchestrator | 2026-01-03 03:32:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:32:57.952182 | orchestrator | 2026-01-03 03:32:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:32:57.952206 | orchestrator | 2026-01-03 03:32:57 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:33:01.001683 | orchestrator | 2026-01-03 03:33:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:01.005687 | orchestrator | 2026-01-03 03:33:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:01.005797 | orchestrator | 2026-01-03 03:33:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:04.048308 | orchestrator | 2026-01-03 03:33:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:04.049434 | orchestrator | 2026-01-03 03:33:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:04.049472 | orchestrator | 2026-01-03 03:33:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:07.100975 | orchestrator | 2026-01-03 03:33:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:07.102451 | orchestrator | 2026-01-03 03:33:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:07.102616 | orchestrator | 2026-01-03 03:33:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:10.144394 | orchestrator | 2026-01-03 03:33:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:10.146209 | orchestrator | 2026-01-03 03:33:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:10.146380 | orchestrator | 2026-01-03 03:33:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:13.193417 | orchestrator | 2026-01-03 03:33:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:13.195593 | orchestrator | 2026-01-03 03:33:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:13.195662 | orchestrator | 2026-01-03 03:33:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:16.235494 | orchestrator | 
2026-01-03 03:33:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:16.237878 | orchestrator | 2026-01-03 03:33:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:16.238057 | orchestrator | 2026-01-03 03:33:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:19.290063 | orchestrator | 2026-01-03 03:33:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:19.291846 | orchestrator | 2026-01-03 03:33:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:19.291918 | orchestrator | 2026-01-03 03:33:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:22.343001 | orchestrator | 2026-01-03 03:33:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:22.346844 | orchestrator | 2026-01-03 03:33:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:22.346908 | orchestrator | 2026-01-03 03:33:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:25.402205 | orchestrator | 2026-01-03 03:33:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:25.406776 | orchestrator | 2026-01-03 03:33:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:25.406901 | orchestrator | 2026-01-03 03:33:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:28.459303 | orchestrator | 2026-01-03 03:33:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:28.461426 | orchestrator | 2026-01-03 03:33:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:28.461578 | orchestrator | 2026-01-03 03:33:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:31.515341 | orchestrator | 2026-01-03 03:33:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:33:31.518304 | orchestrator | 2026-01-03 03:33:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:31.518388 | orchestrator | 2026-01-03 03:33:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:34.564642 | orchestrator | 2026-01-03 03:33:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:34.566171 | orchestrator | 2026-01-03 03:33:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:34.566246 | orchestrator | 2026-01-03 03:33:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:37.615413 | orchestrator | 2026-01-03 03:33:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:37.616948 | orchestrator | 2026-01-03 03:33:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:37.617186 | orchestrator | 2026-01-03 03:33:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:40.668162 | orchestrator | 2026-01-03 03:33:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:40.670514 | orchestrator | 2026-01-03 03:33:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:40.670654 | orchestrator | 2026-01-03 03:33:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:43.719750 | orchestrator | 2026-01-03 03:33:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:43.723547 | orchestrator | 2026-01-03 03:33:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:43.723749 | orchestrator | 2026-01-03 03:33:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:46.770457 | orchestrator | 2026-01-03 03:33:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:46.771899 | orchestrator | 2026-01-03 03:33:46 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:46.772015 | orchestrator | 2026-01-03 03:33:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:49.814190 | orchestrator | 2026-01-03 03:33:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:49.816166 | orchestrator | 2026-01-03 03:33:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:49.816212 | orchestrator | 2026-01-03 03:33:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:52.860067 | orchestrator | 2026-01-03 03:33:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:52.862430 | orchestrator | 2026-01-03 03:33:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:52.862537 | orchestrator | 2026-01-03 03:33:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:55.914957 | orchestrator | 2026-01-03 03:33:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:55.915631 | orchestrator | 2026-01-03 03:33:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:55.915653 | orchestrator | 2026-01-03 03:33:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:33:58.959523 | orchestrator | 2026-01-03 03:33:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:33:58.961453 | orchestrator | 2026-01-03 03:33:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:33:58.961523 | orchestrator | 2026-01-03 03:33:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:02.009442 | orchestrator | 2026-01-03 03:34:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:02.012740 | orchestrator | 2026-01-03 03:34:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:34:02.012834 | orchestrator | 2026-01-03 03:34:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:05.062518 | orchestrator | 2026-01-03 03:34:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:05.063269 | orchestrator | 2026-01-03 03:34:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:05.063338 | orchestrator | 2026-01-03 03:34:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:08.105682 | orchestrator | 2026-01-03 03:34:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:08.107540 | orchestrator | 2026-01-03 03:34:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:08.110486 | orchestrator | 2026-01-03 03:34:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:11.158840 | orchestrator | 2026-01-03 03:34:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:11.160332 | orchestrator | 2026-01-03 03:34:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:11.160426 | orchestrator | 2026-01-03 03:34:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:14.215755 | orchestrator | 2026-01-03 03:34:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:14.216377 | orchestrator | 2026-01-03 03:34:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:14.216817 | orchestrator | 2026-01-03 03:34:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:17.271538 | orchestrator | 2026-01-03 03:34:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:17.273503 | orchestrator | 2026-01-03 03:34:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:17.273553 | orchestrator | 2026-01-03 03:34:17 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:34:20.322628 | orchestrator | 2026-01-03 03:34:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:20.324443 | orchestrator | 2026-01-03 03:34:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:20.324505 | orchestrator | 2026-01-03 03:34:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:23.376411 | orchestrator | 2026-01-03 03:34:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:23.378474 | orchestrator | 2026-01-03 03:34:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:23.378768 | orchestrator | 2026-01-03 03:34:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:26.422473 | orchestrator | 2026-01-03 03:34:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:26.425238 | orchestrator | 2026-01-03 03:34:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:26.425362 | orchestrator | 2026-01-03 03:34:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:29.471079 | orchestrator | 2026-01-03 03:34:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:29.473278 | orchestrator | 2026-01-03 03:34:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:29.473317 | orchestrator | 2026-01-03 03:34:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:32.519101 | orchestrator | 2026-01-03 03:34:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:32.522086 | orchestrator | 2026-01-03 03:34:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:32.522158 | orchestrator | 2026-01-03 03:34:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:35.566155 | orchestrator | 
2026-01-03 03:34:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:35.568544 | orchestrator | 2026-01-03 03:34:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:35.568625 | orchestrator | 2026-01-03 03:34:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:38.614602 | orchestrator | 2026-01-03 03:34:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:38.616592 | orchestrator | 2026-01-03 03:34:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:38.616649 | orchestrator | 2026-01-03 03:34:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:41.663914 | orchestrator | 2026-01-03 03:34:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:41.665531 | orchestrator | 2026-01-03 03:34:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:41.665625 | orchestrator | 2026-01-03 03:34:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:44.707887 | orchestrator | 2026-01-03 03:34:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:44.710809 | orchestrator | 2026-01-03 03:34:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:44.710893 | orchestrator | 2026-01-03 03:34:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:47.761184 | orchestrator | 2026-01-03 03:34:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:47.762403 | orchestrator | 2026-01-03 03:34:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:47.762449 | orchestrator | 2026-01-03 03:34:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:50.812050 | orchestrator | 2026-01-03 03:34:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:34:50.813602 | orchestrator | 2026-01-03 03:34:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:50.813882 | orchestrator | 2026-01-03 03:34:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:53.862461 | orchestrator | 2026-01-03 03:34:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:53.864245 | orchestrator | 2026-01-03 03:34:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:53.864420 | orchestrator | 2026-01-03 03:34:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:56.915332 | orchestrator | 2026-01-03 03:34:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:56.916410 | orchestrator | 2026-01-03 03:34:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:56.916499 | orchestrator | 2026-01-03 03:34:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:34:59.961865 | orchestrator | 2026-01-03 03:34:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:34:59.963655 | orchestrator | 2026-01-03 03:34:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:34:59.963697 | orchestrator | 2026-01-03 03:34:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:03.009666 | orchestrator | 2026-01-03 03:35:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:03.013013 | orchestrator | 2026-01-03 03:35:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:03.013124 | orchestrator | 2026-01-03 03:35:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:06.058723 | orchestrator | 2026-01-03 03:35:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:06.059085 | orchestrator | 2026-01-03 03:35:06 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:06.059101 | orchestrator | 2026-01-03 03:35:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:09.094665 | orchestrator | 2026-01-03 03:35:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:09.095053 | orchestrator | 2026-01-03 03:35:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:09.095068 | orchestrator | 2026-01-03 03:35:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:12.152025 | orchestrator | 2026-01-03 03:35:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:12.154248 | orchestrator | 2026-01-03 03:35:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:12.154296 | orchestrator | 2026-01-03 03:35:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:15.198851 | orchestrator | 2026-01-03 03:35:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:15.201341 | orchestrator | 2026-01-03 03:35:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:15.201418 | orchestrator | 2026-01-03 03:35:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:18.248525 | orchestrator | 2026-01-03 03:35:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:18.249488 | orchestrator | 2026-01-03 03:35:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:18.249597 | orchestrator | 2026-01-03 03:35:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:21.290075 | orchestrator | 2026-01-03 03:35:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:21.292114 | orchestrator | 2026-01-03 03:35:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:35:21.292213 | orchestrator | 2026-01-03 03:35:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:24.340075 | orchestrator | 2026-01-03 03:35:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:24.343234 | orchestrator | 2026-01-03 03:35:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:24.343313 | orchestrator | 2026-01-03 03:35:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:27.390861 | orchestrator | 2026-01-03 03:35:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:27.393149 | orchestrator | 2026-01-03 03:35:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:27.393188 | orchestrator | 2026-01-03 03:35:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:30.437954 | orchestrator | 2026-01-03 03:35:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:30.439727 | orchestrator | 2026-01-03 03:35:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:30.439777 | orchestrator | 2026-01-03 03:35:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:33.486644 | orchestrator | 2026-01-03 03:35:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:33.488431 | orchestrator | 2026-01-03 03:35:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:33.488537 | orchestrator | 2026-01-03 03:35:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:36.534171 | orchestrator | 2026-01-03 03:35:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:36.535386 | orchestrator | 2026-01-03 03:35:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:36.535421 | orchestrator | 2026-01-03 03:35:36 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:35:39.582560 | orchestrator | 2026-01-03 03:35:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:39.583949 | orchestrator | 2026-01-03 03:35:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:39.583988 | orchestrator | 2026-01-03 03:35:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:42.639650 | orchestrator | 2026-01-03 03:35:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:42.641250 | orchestrator | 2026-01-03 03:35:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:42.641309 | orchestrator | 2026-01-03 03:35:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:45.685198 | orchestrator | 2026-01-03 03:35:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:45.687273 | orchestrator | 2026-01-03 03:35:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:45.687493 | orchestrator | 2026-01-03 03:35:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:48.741095 | orchestrator | 2026-01-03 03:35:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:48.743201 | orchestrator | 2026-01-03 03:35:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:48.743356 | orchestrator | 2026-01-03 03:35:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:51.787938 | orchestrator | 2026-01-03 03:35:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:51.789469 | orchestrator | 2026-01-03 03:35:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:51.789555 | orchestrator | 2026-01-03 03:35:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:54.837741 | orchestrator | 
2026-01-03 03:35:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:54.839237 | orchestrator | 2026-01-03 03:35:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:54.839292 | orchestrator | 2026-01-03 03:35:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:35:57.889366 | orchestrator | 2026-01-03 03:35:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:35:57.891881 | orchestrator | 2026-01-03 03:35:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:35:57.892173 | orchestrator | 2026-01-03 03:35:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:36:00.938199 | orchestrator | 2026-01-03 03:36:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:36:00.939173 | orchestrator | 2026-01-03 03:36:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:36:00.939303 | orchestrator | 2026-01-03 03:36:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:36:03.986789 | orchestrator | 2026-01-03 03:36:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:36:03.990521 | orchestrator | 2026-01-03 03:36:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:36:03.990578 | orchestrator | 2026-01-03 03:36:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:36:07.032068 | orchestrator | 2026-01-03 03:36:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:36:07.033335 | orchestrator | 2026-01-03 03:36:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:36:07.033377 | orchestrator | 2026-01-03 03:36:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:36:10.076637 | orchestrator | 2026-01-03 03:36:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:36:10.080035 | orchestrator | 2026-01-03 03:36:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:36:10.080235 | orchestrator | 2026-01-03 03:36:10 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output elided: tasks c8921031-5513-4b9c-ad45-42b04eed7ef5 and bba6cba5-900c-422c-89b8-94737fda4049 remained in state STARTED, re-checked every ~3 seconds ("Wait 1 second(s) until the next check") from 2026-01-03 03:36:13 through 2026-01-03 03:41:24 ...]
2026-01-03 03:41:27.283471 | orchestrator | 2026-01-03 03:41:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:41:27.284719 | orchestrator | 2026-01-03 03:41:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:41:27.284757 | orchestrator | 2026-01-03 03:41:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:30.337983 | orchestrator | 2026-01-03 03:41:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:41:30.340083 | orchestrator | 2026-01-03 03:41:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:41:30.340144 | orchestrator | 2026-01-03 03:41:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:33.390503 | orchestrator | 2026-01-03 03:41:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:41:33.392932 | orchestrator | 2026-01-03 03:41:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:41:33.392968 | orchestrator | 2026-01-03 03:41:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:36.440896 | orchestrator | 2026-01-03 03:41:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:41:36.442809 | orchestrator | 2026-01-03 03:41:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:41:36.442866 | orchestrator | 2026-01-03 03:41:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:39.492428 | orchestrator | 2026-01-03 03:41:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:41:39.494862 | orchestrator | 2026-01-03 03:41:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:41:39.494900 | orchestrator | 2026-01-03 03:41:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:42.543276 | orchestrator | 2026-01-03 03:41:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:41:42.544554 | orchestrator | 2026-01-03 03:41:42 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:41:42.544603 | orchestrator | 2026-01-03 03:41:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:45.591527 | orchestrator | 2026-01-03 03:41:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:41:45.593401 | orchestrator | 2026-01-03 03:41:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:41:45.593482 | orchestrator | 2026-01-03 03:41:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:48.644696 | orchestrator | 2026-01-03 03:41:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:41:48.646267 | orchestrator | 2026-01-03 03:41:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:41:48.646355 | orchestrator | 2026-01-03 03:41:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:51.695145 | orchestrator | 2026-01-03 03:41:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:41:51.697260 | orchestrator | 2026-01-03 03:41:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:41:51.697475 | orchestrator | 2026-01-03 03:41:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:54.742697 | orchestrator | 2026-01-03 03:41:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:41:54.744150 | orchestrator | 2026-01-03 03:41:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:41:54.744407 | orchestrator | 2026-01-03 03:41:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:41:57.793487 | orchestrator | 2026-01-03 03:41:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:41:57.795018 | orchestrator | 2026-01-03 03:41:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:41:57.795200 | orchestrator | 2026-01-03 03:41:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:00.840556 | orchestrator | 2026-01-03 03:42:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:00.842960 | orchestrator | 2026-01-03 03:42:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:00.843006 | orchestrator | 2026-01-03 03:42:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:03.888223 | orchestrator | 2026-01-03 03:42:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:03.889879 | orchestrator | 2026-01-03 03:42:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:03.889974 | orchestrator | 2026-01-03 03:42:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:06.937096 | orchestrator | 2026-01-03 03:42:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:06.939850 | orchestrator | 2026-01-03 03:42:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:06.939922 | orchestrator | 2026-01-03 03:42:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:09.982687 | orchestrator | 2026-01-03 03:42:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:09.984678 | orchestrator | 2026-01-03 03:42:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:09.984727 | orchestrator | 2026-01-03 03:42:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:13.031974 | orchestrator | 2026-01-03 03:42:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:13.032789 | orchestrator | 2026-01-03 03:42:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:13.032832 | orchestrator | 2026-01-03 03:42:13 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:42:16.080752 | orchestrator | 2026-01-03 03:42:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:16.083016 | orchestrator | 2026-01-03 03:42:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:16.083115 | orchestrator | 2026-01-03 03:42:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:19.128566 | orchestrator | 2026-01-03 03:42:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:19.131405 | orchestrator | 2026-01-03 03:42:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:19.131605 | orchestrator | 2026-01-03 03:42:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:22.172511 | orchestrator | 2026-01-03 03:42:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:22.174480 | orchestrator | 2026-01-03 03:42:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:22.174527 | orchestrator | 2026-01-03 03:42:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:25.221585 | orchestrator | 2026-01-03 03:42:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:25.224085 | orchestrator | 2026-01-03 03:42:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:25.224197 | orchestrator | 2026-01-03 03:42:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:28.273737 | orchestrator | 2026-01-03 03:42:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:28.278565 | orchestrator | 2026-01-03 03:42:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:28.278647 | orchestrator | 2026-01-03 03:42:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:31.323606 | orchestrator | 
2026-01-03 03:42:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:31.324964 | orchestrator | 2026-01-03 03:42:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:31.325044 | orchestrator | 2026-01-03 03:42:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:34.371893 | orchestrator | 2026-01-03 03:42:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:34.373159 | orchestrator | 2026-01-03 03:42:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:34.373253 | orchestrator | 2026-01-03 03:42:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:37.418580 | orchestrator | 2026-01-03 03:42:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:37.420323 | orchestrator | 2026-01-03 03:42:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:37.420400 | orchestrator | 2026-01-03 03:42:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:40.469474 | orchestrator | 2026-01-03 03:42:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:40.472215 | orchestrator | 2026-01-03 03:42:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:40.472288 | orchestrator | 2026-01-03 03:42:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:43.517951 | orchestrator | 2026-01-03 03:42:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:43.519746 | orchestrator | 2026-01-03 03:42:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:43.519792 | orchestrator | 2026-01-03 03:42:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:46.568826 | orchestrator | 2026-01-03 03:42:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:42:46.570272 | orchestrator | 2026-01-03 03:42:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:46.570374 | orchestrator | 2026-01-03 03:42:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:49.619324 | orchestrator | 2026-01-03 03:42:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:49.619838 | orchestrator | 2026-01-03 03:42:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:49.619925 | orchestrator | 2026-01-03 03:42:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:52.665010 | orchestrator | 2026-01-03 03:42:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:52.666669 | orchestrator | 2026-01-03 03:42:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:52.666728 | orchestrator | 2026-01-03 03:42:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:55.712649 | orchestrator | 2026-01-03 03:42:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:55.715190 | orchestrator | 2026-01-03 03:42:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:55.715437 | orchestrator | 2026-01-03 03:42:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:42:58.758870 | orchestrator | 2026-01-03 03:42:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:42:58.760579 | orchestrator | 2026-01-03 03:42:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:42:58.760654 | orchestrator | 2026-01-03 03:42:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:01.804701 | orchestrator | 2026-01-03 03:43:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:01.807739 | orchestrator | 2026-01-03 03:43:01 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:01.807972 | orchestrator | 2026-01-03 03:43:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:04.852850 | orchestrator | 2026-01-03 03:43:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:04.855159 | orchestrator | 2026-01-03 03:43:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:04.855761 | orchestrator | 2026-01-03 03:43:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:07.900332 | orchestrator | 2026-01-03 03:43:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:07.903064 | orchestrator | 2026-01-03 03:43:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:07.903111 | orchestrator | 2026-01-03 03:43:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:10.954160 | orchestrator | 2026-01-03 03:43:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:10.955520 | orchestrator | 2026-01-03 03:43:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:10.955585 | orchestrator | 2026-01-03 03:43:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:14.001697 | orchestrator | 2026-01-03 03:43:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:14.003557 | orchestrator | 2026-01-03 03:43:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:14.003622 | orchestrator | 2026-01-03 03:43:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:17.053059 | orchestrator | 2026-01-03 03:43:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:17.055209 | orchestrator | 2026-01-03 03:43:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:43:17.055310 | orchestrator | 2026-01-03 03:43:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:20.092786 | orchestrator | 2026-01-03 03:43:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:20.095180 | orchestrator | 2026-01-03 03:43:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:20.095247 | orchestrator | 2026-01-03 03:43:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:23.135179 | orchestrator | 2026-01-03 03:43:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:23.135829 | orchestrator | 2026-01-03 03:43:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:23.135926 | orchestrator | 2026-01-03 03:43:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:26.186314 | orchestrator | 2026-01-03 03:43:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:26.189168 | orchestrator | 2026-01-03 03:43:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:26.189284 | orchestrator | 2026-01-03 03:43:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:29.236214 | orchestrator | 2026-01-03 03:43:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:29.237328 | orchestrator | 2026-01-03 03:43:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:29.237361 | orchestrator | 2026-01-03 03:43:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:32.289425 | orchestrator | 2026-01-03 03:43:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:32.291204 | orchestrator | 2026-01-03 03:43:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:32.291402 | orchestrator | 2026-01-03 03:43:32 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:43:35.336263 | orchestrator | 2026-01-03 03:43:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:35.337805 | orchestrator | 2026-01-03 03:43:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:35.337984 | orchestrator | 2026-01-03 03:43:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:38.379259 | orchestrator | 2026-01-03 03:43:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:38.380798 | orchestrator | 2026-01-03 03:43:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:38.381140 | orchestrator | 2026-01-03 03:43:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:41.432177 | orchestrator | 2026-01-03 03:43:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:41.434582 | orchestrator | 2026-01-03 03:43:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:41.434643 | orchestrator | 2026-01-03 03:43:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:44.478115 | orchestrator | 2026-01-03 03:43:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:44.479183 | orchestrator | 2026-01-03 03:43:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:44.479493 | orchestrator | 2026-01-03 03:43:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:47.520099 | orchestrator | 2026-01-03 03:43:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:47.521708 | orchestrator | 2026-01-03 03:43:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:47.521759 | orchestrator | 2026-01-03 03:43:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:50.572787 | orchestrator | 
2026-01-03 03:43:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:50.574160 | orchestrator | 2026-01-03 03:43:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:50.574494 | orchestrator | 2026-01-03 03:43:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:53.621346 | orchestrator | 2026-01-03 03:43:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:53.623581 | orchestrator | 2026-01-03 03:43:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:53.623723 | orchestrator | 2026-01-03 03:43:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:56.674878 | orchestrator | 2026-01-03 03:43:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:56.676144 | orchestrator | 2026-01-03 03:43:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:56.676331 | orchestrator | 2026-01-03 03:43:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:43:59.722860 | orchestrator | 2026-01-03 03:43:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:43:59.724563 | orchestrator | 2026-01-03 03:43:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:43:59.724632 | orchestrator | 2026-01-03 03:43:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:02.772767 | orchestrator | 2026-01-03 03:44:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:02.774071 | orchestrator | 2026-01-03 03:44:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:02.774089 | orchestrator | 2026-01-03 03:44:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:05.825917 | orchestrator | 2026-01-03 03:44:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:44:05.827652 | orchestrator | 2026-01-03 03:44:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:05.827692 | orchestrator | 2026-01-03 03:44:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:08.876888 | orchestrator | 2026-01-03 03:44:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:08.878476 | orchestrator | 2026-01-03 03:44:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:08.878569 | orchestrator | 2026-01-03 03:44:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:11.925631 | orchestrator | 2026-01-03 03:44:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:11.927072 | orchestrator | 2026-01-03 03:44:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:11.927146 | orchestrator | 2026-01-03 03:44:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:14.973329 | orchestrator | 2026-01-03 03:44:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:14.975728 | orchestrator | 2026-01-03 03:44:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:14.975814 | orchestrator | 2026-01-03 03:44:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:18.025383 | orchestrator | 2026-01-03 03:44:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:18.027374 | orchestrator | 2026-01-03 03:44:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:18.027438 | orchestrator | 2026-01-03 03:44:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:21.070113 | orchestrator | 2026-01-03 03:44:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:21.073095 | orchestrator | 2026-01-03 03:44:21 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:21.073195 | orchestrator | 2026-01-03 03:44:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:24.119674 | orchestrator | 2026-01-03 03:44:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:24.121303 | orchestrator | 2026-01-03 03:44:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:24.121382 | orchestrator | 2026-01-03 03:44:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:27.171340 | orchestrator | 2026-01-03 03:44:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:27.172907 | orchestrator | 2026-01-03 03:44:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:27.172949 | orchestrator | 2026-01-03 03:44:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:30.214797 | orchestrator | 2026-01-03 03:44:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:30.217949 | orchestrator | 2026-01-03 03:44:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:30.218002 | orchestrator | 2026-01-03 03:44:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:33.265661 | orchestrator | 2026-01-03 03:44:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:33.267219 | orchestrator | 2026-01-03 03:44:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:33.267299 | orchestrator | 2026-01-03 03:44:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:36.313284 | orchestrator | 2026-01-03 03:44:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:36.315333 | orchestrator | 2026-01-03 03:44:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:44:36.315409 | orchestrator | 2026-01-03 03:44:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:39.360744 | orchestrator | 2026-01-03 03:44:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:39.362302 | orchestrator | 2026-01-03 03:44:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:39.362463 | orchestrator | 2026-01-03 03:44:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:42.408572 | orchestrator | 2026-01-03 03:44:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:42.410231 | orchestrator | 2026-01-03 03:44:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:42.410274 | orchestrator | 2026-01-03 03:44:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:45.458265 | orchestrator | 2026-01-03 03:44:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:45.459008 | orchestrator | 2026-01-03 03:44:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:45.459222 | orchestrator | 2026-01-03 03:44:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:48.507489 | orchestrator | 2026-01-03 03:44:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:48.508869 | orchestrator | 2026-01-03 03:44:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:48.509002 | orchestrator | 2026-01-03 03:44:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:44:51.560314 | orchestrator | 2026-01-03 03:44:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:51.562924 | orchestrator | 2026-01-03 03:44:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:51.562986 | orchestrator | 2026-01-03 03:44:51 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:44:54.608793 | orchestrator | 2026-01-03 03:44:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:44:54.612479 | orchestrator | 2026-01-03 03:44:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:44:54.612590 | orchestrator | 2026-01-03 03:44:54 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for tasks c8921031-5513-4b9c-ad45-42b04eed7ef5 and bba6cba5-900c-422c-89b8-94737fda4049 repeated every ~3 seconds from 03:44:57 to 03:50:05; both tasks remained in state STARTED throughout ...]
2026-01-03 03:50:08.807181 | orchestrator | 2026-01-03 03:50:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:08.808926 | orchestrator | 2026-01-03 03:50:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:08.809102 | orchestrator | 2026-01-03 03:50:08 | INFO  | Wait
1 second(s) until the next check 2026-01-03 03:50:11.857342 | orchestrator | 2026-01-03 03:50:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:11.858783 | orchestrator | 2026-01-03 03:50:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:11.858824 | orchestrator | 2026-01-03 03:50:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:14.904543 | orchestrator | 2026-01-03 03:50:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:14.906851 | orchestrator | 2026-01-03 03:50:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:14.906953 | orchestrator | 2026-01-03 03:50:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:17.955781 | orchestrator | 2026-01-03 03:50:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:17.957325 | orchestrator | 2026-01-03 03:50:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:17.957423 | orchestrator | 2026-01-03 03:50:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:21.004252 | orchestrator | 2026-01-03 03:50:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:21.005832 | orchestrator | 2026-01-03 03:50:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:21.005907 | orchestrator | 2026-01-03 03:50:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:24.051436 | orchestrator | 2026-01-03 03:50:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:24.051969 | orchestrator | 2026-01-03 03:50:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:24.052218 | orchestrator | 2026-01-03 03:50:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:27.101320 | orchestrator | 
2026-01-03 03:50:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:27.103328 | orchestrator | 2026-01-03 03:50:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:27.103381 | orchestrator | 2026-01-03 03:50:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:30.146330 | orchestrator | 2026-01-03 03:50:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:30.147797 | orchestrator | 2026-01-03 03:50:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:30.147847 | orchestrator | 2026-01-03 03:50:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:33.197474 | orchestrator | 2026-01-03 03:50:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:33.199001 | orchestrator | 2026-01-03 03:50:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:33.199088 | orchestrator | 2026-01-03 03:50:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:36.244214 | orchestrator | 2026-01-03 03:50:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:36.245729 | orchestrator | 2026-01-03 03:50:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:36.245779 | orchestrator | 2026-01-03 03:50:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:39.290653 | orchestrator | 2026-01-03 03:50:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:39.292288 | orchestrator | 2026-01-03 03:50:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:39.292645 | orchestrator | 2026-01-03 03:50:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:42.335209 | orchestrator | 2026-01-03 03:50:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:50:42.336275 | orchestrator | 2026-01-03 03:50:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:42.336340 | orchestrator | 2026-01-03 03:50:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:45.387163 | orchestrator | 2026-01-03 03:50:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:45.389313 | orchestrator | 2026-01-03 03:50:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:45.389400 | orchestrator | 2026-01-03 03:50:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:48.436917 | orchestrator | 2026-01-03 03:50:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:48.437698 | orchestrator | 2026-01-03 03:50:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:48.437734 | orchestrator | 2026-01-03 03:50:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:51.484369 | orchestrator | 2026-01-03 03:50:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:51.485840 | orchestrator | 2026-01-03 03:50:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:51.486000 | orchestrator | 2026-01-03 03:50:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:54.536729 | orchestrator | 2026-01-03 03:50:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:54.538626 | orchestrator | 2026-01-03 03:50:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:54.538689 | orchestrator | 2026-01-03 03:50:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:50:57.591904 | orchestrator | 2026-01-03 03:50:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:50:57.594712 | orchestrator | 2026-01-03 03:50:57 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:50:57.594766 | orchestrator | 2026-01-03 03:50:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:00.647147 | orchestrator | 2026-01-03 03:51:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:00.649224 | orchestrator | 2026-01-03 03:51:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:00.649296 | orchestrator | 2026-01-03 03:51:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:03.689828 | orchestrator | 2026-01-03 03:51:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:03.691683 | orchestrator | 2026-01-03 03:51:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:03.691741 | orchestrator | 2026-01-03 03:51:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:06.734234 | orchestrator | 2026-01-03 03:51:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:06.735720 | orchestrator | 2026-01-03 03:51:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:06.735840 | orchestrator | 2026-01-03 03:51:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:09.785506 | orchestrator | 2026-01-03 03:51:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:09.786907 | orchestrator | 2026-01-03 03:51:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:09.787008 | orchestrator | 2026-01-03 03:51:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:12.836305 | orchestrator | 2026-01-03 03:51:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:12.838243 | orchestrator | 2026-01-03 03:51:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:51:12.838274 | orchestrator | 2026-01-03 03:51:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:15.884180 | orchestrator | 2026-01-03 03:51:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:15.885754 | orchestrator | 2026-01-03 03:51:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:15.885906 | orchestrator | 2026-01-03 03:51:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:18.931442 | orchestrator | 2026-01-03 03:51:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:18.932503 | orchestrator | 2026-01-03 03:51:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:18.932554 | orchestrator | 2026-01-03 03:51:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:21.977524 | orchestrator | 2026-01-03 03:51:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:21.979797 | orchestrator | 2026-01-03 03:51:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:21.979844 | orchestrator | 2026-01-03 03:51:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:25.029665 | orchestrator | 2026-01-03 03:51:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:25.030989 | orchestrator | 2026-01-03 03:51:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:25.031027 | orchestrator | 2026-01-03 03:51:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:28.080674 | orchestrator | 2026-01-03 03:51:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:28.082145 | orchestrator | 2026-01-03 03:51:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:28.082194 | orchestrator | 2026-01-03 03:51:28 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:51:31.127068 | orchestrator | 2026-01-03 03:51:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:31.129034 | orchestrator | 2026-01-03 03:51:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:31.129116 | orchestrator | 2026-01-03 03:51:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:34.181624 | orchestrator | 2026-01-03 03:51:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:34.183208 | orchestrator | 2026-01-03 03:51:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:34.183243 | orchestrator | 2026-01-03 03:51:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:37.232399 | orchestrator | 2026-01-03 03:51:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:37.234076 | orchestrator | 2026-01-03 03:51:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:37.234313 | orchestrator | 2026-01-03 03:51:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:40.282177 | orchestrator | 2026-01-03 03:51:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:40.283805 | orchestrator | 2026-01-03 03:51:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:40.283851 | orchestrator | 2026-01-03 03:51:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:43.329641 | orchestrator | 2026-01-03 03:51:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:43.330458 | orchestrator | 2026-01-03 03:51:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:43.330481 | orchestrator | 2026-01-03 03:51:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:46.372342 | orchestrator | 
2026-01-03 03:51:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:46.373978 | orchestrator | 2026-01-03 03:51:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:46.373999 | orchestrator | 2026-01-03 03:51:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:49.416909 | orchestrator | 2026-01-03 03:51:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:49.418959 | orchestrator | 2026-01-03 03:51:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:49.419114 | orchestrator | 2026-01-03 03:51:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:52.464032 | orchestrator | 2026-01-03 03:51:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:52.465370 | orchestrator | 2026-01-03 03:51:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:52.465417 | orchestrator | 2026-01-03 03:51:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:55.514894 | orchestrator | 2026-01-03 03:51:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:55.517248 | orchestrator | 2026-01-03 03:51:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:55.517293 | orchestrator | 2026-01-03 03:51:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:51:58.570259 | orchestrator | 2026-01-03 03:51:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:51:58.571402 | orchestrator | 2026-01-03 03:51:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:51:58.571429 | orchestrator | 2026-01-03 03:51:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:01.626677 | orchestrator | 2026-01-03 03:52:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:52:01.628846 | orchestrator | 2026-01-03 03:52:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:01.628985 | orchestrator | 2026-01-03 03:52:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:04.674961 | orchestrator | 2026-01-03 03:52:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:04.679037 | orchestrator | 2026-01-03 03:52:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:04.679115 | orchestrator | 2026-01-03 03:52:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:07.732935 | orchestrator | 2026-01-03 03:52:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:07.734831 | orchestrator | 2026-01-03 03:52:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:07.734925 | orchestrator | 2026-01-03 03:52:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:10.780205 | orchestrator | 2026-01-03 03:52:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:10.782345 | orchestrator | 2026-01-03 03:52:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:10.782393 | orchestrator | 2026-01-03 03:52:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:13.831176 | orchestrator | 2026-01-03 03:52:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:13.832953 | orchestrator | 2026-01-03 03:52:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:13.832977 | orchestrator | 2026-01-03 03:52:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:16.877872 | orchestrator | 2026-01-03 03:52:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:16.879296 | orchestrator | 2026-01-03 03:52:16 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:16.879386 | orchestrator | 2026-01-03 03:52:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:19.919571 | orchestrator | 2026-01-03 03:52:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:19.920995 | orchestrator | 2026-01-03 03:52:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:19.921087 | orchestrator | 2026-01-03 03:52:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:22.973542 | orchestrator | 2026-01-03 03:52:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:22.975594 | orchestrator | 2026-01-03 03:52:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:22.975634 | orchestrator | 2026-01-03 03:52:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:26.026900 | orchestrator | 2026-01-03 03:52:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:26.028922 | orchestrator | 2026-01-03 03:52:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:26.029146 | orchestrator | 2026-01-03 03:52:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:29.074681 | orchestrator | 2026-01-03 03:52:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:29.075635 | orchestrator | 2026-01-03 03:52:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:29.075656 | orchestrator | 2026-01-03 03:52:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:32.120628 | orchestrator | 2026-01-03 03:52:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:32.122124 | orchestrator | 2026-01-03 03:52:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:52:32.122185 | orchestrator | 2026-01-03 03:52:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:35.174936 | orchestrator | 2026-01-03 03:52:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:35.177611 | orchestrator | 2026-01-03 03:52:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:35.177665 | orchestrator | 2026-01-03 03:52:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:38.225891 | orchestrator | 2026-01-03 03:52:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:38.227470 | orchestrator | 2026-01-03 03:52:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:38.227540 | orchestrator | 2026-01-03 03:52:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:41.279621 | orchestrator | 2026-01-03 03:52:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:41.282453 | orchestrator | 2026-01-03 03:52:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:41.282494 | orchestrator | 2026-01-03 03:52:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:44.334291 | orchestrator | 2026-01-03 03:52:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:44.335606 | orchestrator | 2026-01-03 03:52:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:44.335660 | orchestrator | 2026-01-03 03:52:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:47.378911 | orchestrator | 2026-01-03 03:52:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:47.380578 | orchestrator | 2026-01-03 03:52:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:47.380624 | orchestrator | 2026-01-03 03:52:47 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:52:50.427654 | orchestrator | 2026-01-03 03:52:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:50.428278 | orchestrator | 2026-01-03 03:52:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:50.428338 | orchestrator | 2026-01-03 03:52:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:53.480705 | orchestrator | 2026-01-03 03:52:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:53.482714 | orchestrator | 2026-01-03 03:52:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:53.482780 | orchestrator | 2026-01-03 03:52:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:56.530336 | orchestrator | 2026-01-03 03:52:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:56.531952 | orchestrator | 2026-01-03 03:52:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:56.532090 | orchestrator | 2026-01-03 03:52:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:52:59.579215 | orchestrator | 2026-01-03 03:52:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:52:59.581006 | orchestrator | 2026-01-03 03:52:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:52:59.581238 | orchestrator | 2026-01-03 03:52:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:02.620780 | orchestrator | 2026-01-03 03:53:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:02.623243 | orchestrator | 2026-01-03 03:53:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:02.623293 | orchestrator | 2026-01-03 03:53:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:05.675685 | orchestrator | 
2026-01-03 03:53:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:05.677937 | orchestrator | 2026-01-03 03:53:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:05.678406 | orchestrator | 2026-01-03 03:53:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:08.728698 | orchestrator | 2026-01-03 03:53:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:08.731089 | orchestrator | 2026-01-03 03:53:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:08.731152 | orchestrator | 2026-01-03 03:53:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:11.782866 | orchestrator | 2026-01-03 03:53:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:11.785017 | orchestrator | 2026-01-03 03:53:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:11.785110 | orchestrator | 2026-01-03 03:53:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:14.835142 | orchestrator | 2026-01-03 03:53:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:14.837686 | orchestrator | 2026-01-03 03:53:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:14.837750 | orchestrator | 2026-01-03 03:53:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:17.886332 | orchestrator | 2026-01-03 03:53:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:17.889563 | orchestrator | 2026-01-03 03:53:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:17.889716 | orchestrator | 2026-01-03 03:53:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:20.941091 | orchestrator | 2026-01-03 03:53:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:53:20.943332 | orchestrator | 2026-01-03 03:53:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:20.943607 | orchestrator | 2026-01-03 03:53:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:23.992791 | orchestrator | 2026-01-03 03:53:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:23.996840 | orchestrator | 2026-01-03 03:53:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:23.997085 | orchestrator | 2026-01-03 03:53:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:27.044730 | orchestrator | 2026-01-03 03:53:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:27.046307 | orchestrator | 2026-01-03 03:53:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:27.046375 | orchestrator | 2026-01-03 03:53:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:30.094829 | orchestrator | 2026-01-03 03:53:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:30.097749 | orchestrator | 2026-01-03 03:53:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:30.097798 | orchestrator | 2026-01-03 03:53:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:33.143745 | orchestrator | 2026-01-03 03:53:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:33.145074 | orchestrator | 2026-01-03 03:53:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:33.145091 | orchestrator | 2026-01-03 03:53:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:36.196040 | orchestrator | 2026-01-03 03:53:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:36.197297 | orchestrator | 2026-01-03 03:53:36 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:36.197404 | orchestrator | 2026-01-03 03:53:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:39.247408 | orchestrator | 2026-01-03 03:53:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:39.248523 | orchestrator | 2026-01-03 03:53:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:39.248543 | orchestrator | 2026-01-03 03:53:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:42.286460 | orchestrator | 2026-01-03 03:53:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:42.288055 | orchestrator | 2026-01-03 03:53:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:42.288093 | orchestrator | 2026-01-03 03:53:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:45.339762 | orchestrator | 2026-01-03 03:53:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:45.341479 | orchestrator | 2026-01-03 03:53:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:45.341607 | orchestrator | 2026-01-03 03:53:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:48.383079 | orchestrator | 2026-01-03 03:53:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:48.384966 | orchestrator | 2026-01-03 03:53:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:53:48.385124 | orchestrator | 2026-01-03 03:53:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:53:51.423614 | orchestrator | 2026-01-03 03:53:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:53:51.425819 | orchestrator | 2026-01-03 03:53:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 03:53:51.425952 | orchestrator | 2026-01-03 03:53:51 | INFO  | Wait 1 second(s) until the next check
2026-01-03 03:53:54.473089 | orchestrator | 2026-01-03 03:53:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 03:53:54.475093 | orchestrator | 2026-01-03 03:53:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 03:53:54.475237 | orchestrator | 2026-01-03 03:53:54 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:53:57 to 03:59:20: both tasks remain in state STARTED, followed by "Wait 1 second(s) until the next check" ...]
2026-01-03 03:59:23.843457 | orchestrator | 2026-01-03 03:59:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 03:59:23.844554 | orchestrator | 2026-01-03 03:59:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 03:59:23.844604 | orchestrator | 2026-01-03 03:59:23 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 03:59:26.896038 | orchestrator | 2026-01-03 03:59:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:59:26.897872 | orchestrator | 2026-01-03 03:59:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:59:26.897888 | orchestrator | 2026-01-03 03:59:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:29.942757 | orchestrator | 2026-01-03 03:59:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:59:29.945514 | orchestrator | 2026-01-03 03:59:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:59:29.945601 | orchestrator | 2026-01-03 03:59:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:32.996780 | orchestrator | 2026-01-03 03:59:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:59:32.998839 | orchestrator | 2026-01-03 03:59:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:59:32.998920 | orchestrator | 2026-01-03 03:59:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:36.048627 | orchestrator | 2026-01-03 03:59:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:59:36.050297 | orchestrator | 2026-01-03 03:59:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:59:36.050355 | orchestrator | 2026-01-03 03:59:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:39.087719 | orchestrator | 2026-01-03 03:59:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:59:39.089125 | orchestrator | 2026-01-03 03:59:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:59:39.089587 | orchestrator | 2026-01-03 03:59:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:42.134273 | orchestrator | 
2026-01-03 03:59:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:59:42.136072 | orchestrator | 2026-01-03 03:59:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:59:42.136110 | orchestrator | 2026-01-03 03:59:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:45.181221 | orchestrator | 2026-01-03 03:59:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:59:45.182651 | orchestrator | 2026-01-03 03:59:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:59:45.182795 | orchestrator | 2026-01-03 03:59:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:48.224174 | orchestrator | 2026-01-03 03:59:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:59:48.226250 | orchestrator | 2026-01-03 03:59:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:59:48.226329 | orchestrator | 2026-01-03 03:59:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:51.273003 | orchestrator | 2026-01-03 03:59:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:59:51.276385 | orchestrator | 2026-01-03 03:59:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:59:51.276491 | orchestrator | 2026-01-03 03:59:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:54.323803 | orchestrator | 2026-01-03 03:59:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 03:59:54.325491 | orchestrator | 2026-01-03 03:59:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:59:54.325527 | orchestrator | 2026-01-03 03:59:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 03:59:57.370726 | orchestrator | 2026-01-03 03:59:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 03:59:57.371647 | orchestrator | 2026-01-03 03:59:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 03:59:57.371737 | orchestrator | 2026-01-03 03:59:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:00.420983 | orchestrator | 2026-01-03 04:00:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:00.423624 | orchestrator | 2026-01-03 04:00:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:00.423680 | orchestrator | 2026-01-03 04:00:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:03.476646 | orchestrator | 2026-01-03 04:00:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:03.478599 | orchestrator | 2026-01-03 04:00:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:03.478653 | orchestrator | 2026-01-03 04:00:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:06.523132 | orchestrator | 2026-01-03 04:00:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:06.525143 | orchestrator | 2026-01-03 04:00:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:06.525218 | orchestrator | 2026-01-03 04:00:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:09.566873 | orchestrator | 2026-01-03 04:00:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:09.567797 | orchestrator | 2026-01-03 04:00:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:09.567860 | orchestrator | 2026-01-03 04:00:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:12.615467 | orchestrator | 2026-01-03 04:00:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:12.617803 | orchestrator | 2026-01-03 04:00:12 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:12.617861 | orchestrator | 2026-01-03 04:00:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:15.662310 | orchestrator | 2026-01-03 04:00:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:15.665465 | orchestrator | 2026-01-03 04:00:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:15.665563 | orchestrator | 2026-01-03 04:00:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:18.713687 | orchestrator | 2026-01-03 04:00:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:18.715533 | orchestrator | 2026-01-03 04:00:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:18.715568 | orchestrator | 2026-01-03 04:00:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:21.760921 | orchestrator | 2026-01-03 04:00:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:21.763166 | orchestrator | 2026-01-03 04:00:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:21.763259 | orchestrator | 2026-01-03 04:00:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:24.808125 | orchestrator | 2026-01-03 04:00:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:24.810215 | orchestrator | 2026-01-03 04:00:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:24.810271 | orchestrator | 2026-01-03 04:00:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:27.854733 | orchestrator | 2026-01-03 04:00:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:27.855875 | orchestrator | 2026-01-03 04:00:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 04:00:27.855907 | orchestrator | 2026-01-03 04:00:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:30.907008 | orchestrator | 2026-01-03 04:00:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:30.907719 | orchestrator | 2026-01-03 04:00:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:30.907771 | orchestrator | 2026-01-03 04:00:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:33.956422 | orchestrator | 2026-01-03 04:00:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:33.957573 | orchestrator | 2026-01-03 04:00:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:33.957614 | orchestrator | 2026-01-03 04:00:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:37.002155 | orchestrator | 2026-01-03 04:00:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:37.003689 | orchestrator | 2026-01-03 04:00:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:37.004134 | orchestrator | 2026-01-03 04:00:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:40.041857 | orchestrator | 2026-01-03 04:00:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:40.044041 | orchestrator | 2026-01-03 04:00:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:40.044197 | orchestrator | 2026-01-03 04:00:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:43.089236 | orchestrator | 2026-01-03 04:00:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:43.090862 | orchestrator | 2026-01-03 04:00:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:43.090909 | orchestrator | 2026-01-03 04:00:43 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 04:00:46.144129 | orchestrator | 2026-01-03 04:00:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:46.146202 | orchestrator | 2026-01-03 04:00:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:46.146456 | orchestrator | 2026-01-03 04:00:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:49.189317 | orchestrator | 2026-01-03 04:00:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:49.193088 | orchestrator | 2026-01-03 04:00:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:49.193258 | orchestrator | 2026-01-03 04:00:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:52.237562 | orchestrator | 2026-01-03 04:00:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:52.238492 | orchestrator | 2026-01-03 04:00:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:52.238529 | orchestrator | 2026-01-03 04:00:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:55.279597 | orchestrator | 2026-01-03 04:00:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:55.281331 | orchestrator | 2026-01-03 04:00:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:55.281388 | orchestrator | 2026-01-03 04:00:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:00:58.332190 | orchestrator | 2026-01-03 04:00:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:00:58.333482 | orchestrator | 2026-01-03 04:00:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:00:58.333558 | orchestrator | 2026-01-03 04:00:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:01.384743 | orchestrator | 
2026-01-03 04:01:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:01.386448 | orchestrator | 2026-01-03 04:01:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:01.386482 | orchestrator | 2026-01-03 04:01:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:04.438274 | orchestrator | 2026-01-03 04:01:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:04.439889 | orchestrator | 2026-01-03 04:01:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:04.439949 | orchestrator | 2026-01-03 04:01:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:07.492410 | orchestrator | 2026-01-03 04:01:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:07.494769 | orchestrator | 2026-01-03 04:01:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:07.494802 | orchestrator | 2026-01-03 04:01:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:10.548296 | orchestrator | 2026-01-03 04:01:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:10.550823 | orchestrator | 2026-01-03 04:01:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:10.550894 | orchestrator | 2026-01-03 04:01:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:13.600217 | orchestrator | 2026-01-03 04:01:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:13.603778 | orchestrator | 2026-01-03 04:01:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:13.603962 | orchestrator | 2026-01-03 04:01:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:16.663363 | orchestrator | 2026-01-03 04:01:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 04:01:16.664830 | orchestrator | 2026-01-03 04:01:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:16.665054 | orchestrator | 2026-01-03 04:01:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:19.706205 | orchestrator | 2026-01-03 04:01:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:19.706748 | orchestrator | 2026-01-03 04:01:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:19.706784 | orchestrator | 2026-01-03 04:01:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:22.755153 | orchestrator | 2026-01-03 04:01:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:22.756566 | orchestrator | 2026-01-03 04:01:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:22.756648 | orchestrator | 2026-01-03 04:01:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:25.804077 | orchestrator | 2026-01-03 04:01:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:25.805987 | orchestrator | 2026-01-03 04:01:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:25.806120 | orchestrator | 2026-01-03 04:01:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:28.855252 | orchestrator | 2026-01-03 04:01:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:28.857736 | orchestrator | 2026-01-03 04:01:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:28.857858 | orchestrator | 2026-01-03 04:01:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:31.904416 | orchestrator | 2026-01-03 04:01:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:31.905744 | orchestrator | 2026-01-03 04:01:31 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:31.905984 | orchestrator | 2026-01-03 04:01:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:34.949275 | orchestrator | 2026-01-03 04:01:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:34.950771 | orchestrator | 2026-01-03 04:01:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:34.950804 | orchestrator | 2026-01-03 04:01:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:37.996582 | orchestrator | 2026-01-03 04:01:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:37.998601 | orchestrator | 2026-01-03 04:01:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:37.998706 | orchestrator | 2026-01-03 04:01:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:41.053849 | orchestrator | 2026-01-03 04:01:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:41.055312 | orchestrator | 2026-01-03 04:01:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:41.055800 | orchestrator | 2026-01-03 04:01:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:44.097613 | orchestrator | 2026-01-03 04:01:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:44.098012 | orchestrator | 2026-01-03 04:01:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:44.098068 | orchestrator | 2026-01-03 04:01:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:47.149487 | orchestrator | 2026-01-03 04:01:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:47.151894 | orchestrator | 2026-01-03 04:01:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 04:01:47.152058 | orchestrator | 2026-01-03 04:01:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:50.198296 | orchestrator | 2026-01-03 04:01:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:50.200270 | orchestrator | 2026-01-03 04:01:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:50.200424 | orchestrator | 2026-01-03 04:01:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:53.247065 | orchestrator | 2026-01-03 04:01:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:53.248855 | orchestrator | 2026-01-03 04:01:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:53.249168 | orchestrator | 2026-01-03 04:01:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:56.292443 | orchestrator | 2026-01-03 04:01:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:56.294854 | orchestrator | 2026-01-03 04:01:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:56.294929 | orchestrator | 2026-01-03 04:01:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:01:59.332422 | orchestrator | 2026-01-03 04:01:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:01:59.335901 | orchestrator | 2026-01-03 04:01:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:01:59.335958 | orchestrator | 2026-01-03 04:01:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:02.385229 | orchestrator | 2026-01-03 04:02:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:02.389195 | orchestrator | 2026-01-03 04:02:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:02.389290 | orchestrator | 2026-01-03 04:02:02 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 04:02:05.434369 | orchestrator | 2026-01-03 04:02:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:05.435418 | orchestrator | 2026-01-03 04:02:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:05.435452 | orchestrator | 2026-01-03 04:02:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:08.482322 | orchestrator | 2026-01-03 04:02:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:08.484082 | orchestrator | 2026-01-03 04:02:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:08.484229 | orchestrator | 2026-01-03 04:02:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:11.535352 | orchestrator | 2026-01-03 04:02:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:11.537364 | orchestrator | 2026-01-03 04:02:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:11.537458 | orchestrator | 2026-01-03 04:02:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:14.585924 | orchestrator | 2026-01-03 04:02:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:14.588515 | orchestrator | 2026-01-03 04:02:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:14.588588 | orchestrator | 2026-01-03 04:02:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:17.637805 | orchestrator | 2026-01-03 04:02:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:17.639824 | orchestrator | 2026-01-03 04:02:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:17.640196 | orchestrator | 2026-01-03 04:02:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:20.688872 | orchestrator | 
2026-01-03 04:02:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:20.690968 | orchestrator | 2026-01-03 04:02:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:20.691008 | orchestrator | 2026-01-03 04:02:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:23.738424 | orchestrator | 2026-01-03 04:02:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:23.741081 | orchestrator | 2026-01-03 04:02:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:23.741130 | orchestrator | 2026-01-03 04:02:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:26.790388 | orchestrator | 2026-01-03 04:02:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:26.792277 | orchestrator | 2026-01-03 04:02:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:26.792363 | orchestrator | 2026-01-03 04:02:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:29.840880 | orchestrator | 2026-01-03 04:02:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:29.842327 | orchestrator | 2026-01-03 04:02:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:29.842359 | orchestrator | 2026-01-03 04:02:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:32.894271 | orchestrator | 2026-01-03 04:02:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:32.896221 | orchestrator | 2026-01-03 04:02:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:32.896270 | orchestrator | 2026-01-03 04:02:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:35.943204 | orchestrator | 2026-01-03 04:02:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 04:02:35.945163 | orchestrator | 2026-01-03 04:02:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:35.945223 | orchestrator | 2026-01-03 04:02:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:38.992417 | orchestrator | 2026-01-03 04:02:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:38.994585 | orchestrator | 2026-01-03 04:02:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:38.994791 | orchestrator | 2026-01-03 04:02:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:42.038091 | orchestrator | 2026-01-03 04:02:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:42.040528 | orchestrator | 2026-01-03 04:02:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:42.040597 | orchestrator | 2026-01-03 04:02:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:45.082336 | orchestrator | 2026-01-03 04:02:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:45.084590 | orchestrator | 2026-01-03 04:02:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:45.084662 | orchestrator | 2026-01-03 04:02:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:48.133705 | orchestrator | 2026-01-03 04:02:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:48.135564 | orchestrator | 2026-01-03 04:02:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:48.135630 | orchestrator | 2026-01-03 04:02:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:51.182190 | orchestrator | 2026-01-03 04:02:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:51.184436 | orchestrator | 2026-01-03 04:02:51 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:51.184512 | orchestrator | 2026-01-03 04:02:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:54.233470 | orchestrator | 2026-01-03 04:02:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:54.234931 | orchestrator | 2026-01-03 04:02:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:54.234966 | orchestrator | 2026-01-03 04:02:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:02:57.278415 | orchestrator | 2026-01-03 04:02:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:02:57.281131 | orchestrator | 2026-01-03 04:02:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:02:57.281238 | orchestrator | 2026-01-03 04:02:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:03:00.321775 | orchestrator | 2026-01-03 04:03:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:03:00.323062 | orchestrator | 2026-01-03 04:03:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:03:00.323094 | orchestrator | 2026-01-03 04:03:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:03:03.372564 | orchestrator | 2026-01-03 04:03:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:03:03.374116 | orchestrator | 2026-01-03 04:03:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:03:03.374163 | orchestrator | 2026-01-03 04:03:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:03:06.426443 | orchestrator | 2026-01-03 04:03:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:03:06.428243 | orchestrator | 2026-01-03 04:03:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 04:03:06.428290 | orchestrator | 2026-01-03 04:03:06 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:03:09.478068 | orchestrator | 2026-01-03 04:03:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:03:09.481229 | orchestrator | 2026-01-03 04:03:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:03:09.481278 | orchestrator | 2026-01-03 04:03:09 | INFO  | Wait 1 second(s) until the next check
[... the same two STARTED status checks and wait message repeated every ~3 seconds through 04:08:05 ...]
2026-01-03 04:08:05.304144 | orchestrator | 2026-01-03 04:08:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:08:05.305930 | orchestrator | 2026-01-03 04:08:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:08:05.306127 | orchestrator | 2026-01-03 04:08:05 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:08:08.356089 | orchestrator | 2026-01-03 04:08:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:08:08.357854 | orchestrator | 2026-01-03 04:08:08 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:08.357891 | orchestrator | 2026-01-03 04:08:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:11.406603 | orchestrator | 2026-01-03 04:08:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:11.408853 | orchestrator | 2026-01-03 04:08:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:11.408904 | orchestrator | 2026-01-03 04:08:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:14.450485 | orchestrator | 2026-01-03 04:08:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:14.451533 | orchestrator | 2026-01-03 04:08:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:14.451578 | orchestrator | 2026-01-03 04:08:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:17.494265 | orchestrator | 2026-01-03 04:08:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:17.497339 | orchestrator | 2026-01-03 04:08:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:17.497713 | orchestrator | 2026-01-03 04:08:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:20.547735 | orchestrator | 2026-01-03 04:08:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:20.551090 | orchestrator | 2026-01-03 04:08:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:20.551177 | orchestrator | 2026-01-03 04:08:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:23.602201 | orchestrator | 2026-01-03 04:08:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:23.604094 | orchestrator | 2026-01-03 04:08:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 04:08:23.604190 | orchestrator | 2026-01-03 04:08:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:26.651198 | orchestrator | 2026-01-03 04:08:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:26.653063 | orchestrator | 2026-01-03 04:08:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:26.653108 | orchestrator | 2026-01-03 04:08:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:29.701555 | orchestrator | 2026-01-03 04:08:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:29.703546 | orchestrator | 2026-01-03 04:08:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:29.703634 | orchestrator | 2026-01-03 04:08:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:32.752187 | orchestrator | 2026-01-03 04:08:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:32.755011 | orchestrator | 2026-01-03 04:08:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:32.755154 | orchestrator | 2026-01-03 04:08:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:35.802363 | orchestrator | 2026-01-03 04:08:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:35.804454 | orchestrator | 2026-01-03 04:08:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:35.804511 | orchestrator | 2026-01-03 04:08:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:38.855121 | orchestrator | 2026-01-03 04:08:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:38.858510 | orchestrator | 2026-01-03 04:08:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:38.858612 | orchestrator | 2026-01-03 04:08:38 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 04:08:41.916182 | orchestrator | 2026-01-03 04:08:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:41.919229 | orchestrator | 2026-01-03 04:08:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:41.919368 | orchestrator | 2026-01-03 04:08:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:44.965846 | orchestrator | 2026-01-03 04:08:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:44.967199 | orchestrator | 2026-01-03 04:08:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:44.967303 | orchestrator | 2026-01-03 04:08:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:48.016529 | orchestrator | 2026-01-03 04:08:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:48.018343 | orchestrator | 2026-01-03 04:08:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:48.018429 | orchestrator | 2026-01-03 04:08:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:51.060648 | orchestrator | 2026-01-03 04:08:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:51.061293 | orchestrator | 2026-01-03 04:08:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:51.061339 | orchestrator | 2026-01-03 04:08:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:54.110266 | orchestrator | 2026-01-03 04:08:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:54.113015 | orchestrator | 2026-01-03 04:08:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:54.113238 | orchestrator | 2026-01-03 04:08:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:08:57.157062 | orchestrator | 
2026-01-03 04:08:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:08:57.159545 | orchestrator | 2026-01-03 04:08:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:08:57.159598 | orchestrator | 2026-01-03 04:08:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:00.208872 | orchestrator | 2026-01-03 04:09:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:00.211290 | orchestrator | 2026-01-03 04:09:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:00.211356 | orchestrator | 2026-01-03 04:09:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:03.250563 | orchestrator | 2026-01-03 04:09:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:03.251520 | orchestrator | 2026-01-03 04:09:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:03.251551 | orchestrator | 2026-01-03 04:09:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:06.300730 | orchestrator | 2026-01-03 04:09:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:06.302301 | orchestrator | 2026-01-03 04:09:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:06.302355 | orchestrator | 2026-01-03 04:09:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:09.343227 | orchestrator | 2026-01-03 04:09:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:09.345503 | orchestrator | 2026-01-03 04:09:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:09.345550 | orchestrator | 2026-01-03 04:09:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:12.393394 | orchestrator | 2026-01-03 04:09:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 04:09:12.394963 | orchestrator | 2026-01-03 04:09:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:12.395035 | orchestrator | 2026-01-03 04:09:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:15.442786 | orchestrator | 2026-01-03 04:09:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:15.444863 | orchestrator | 2026-01-03 04:09:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:15.444915 | orchestrator | 2026-01-03 04:09:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:18.490291 | orchestrator | 2026-01-03 04:09:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:18.491591 | orchestrator | 2026-01-03 04:09:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:18.491750 | orchestrator | 2026-01-03 04:09:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:21.537213 | orchestrator | 2026-01-03 04:09:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:21.539424 | orchestrator | 2026-01-03 04:09:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:21.539466 | orchestrator | 2026-01-03 04:09:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:24.585123 | orchestrator | 2026-01-03 04:09:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:24.585816 | orchestrator | 2026-01-03 04:09:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:24.585919 | orchestrator | 2026-01-03 04:09:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:27.632876 | orchestrator | 2026-01-03 04:09:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:27.634705 | orchestrator | 2026-01-03 04:09:27 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:27.634742 | orchestrator | 2026-01-03 04:09:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:30.683902 | orchestrator | 2026-01-03 04:09:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:30.686413 | orchestrator | 2026-01-03 04:09:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:30.686572 | orchestrator | 2026-01-03 04:09:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:33.737158 | orchestrator | 2026-01-03 04:09:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:33.738265 | orchestrator | 2026-01-03 04:09:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:33.738423 | orchestrator | 2026-01-03 04:09:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:36.790908 | orchestrator | 2026-01-03 04:09:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:36.793907 | orchestrator | 2026-01-03 04:09:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:36.794097 | orchestrator | 2026-01-03 04:09:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:39.839537 | orchestrator | 2026-01-03 04:09:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:39.841172 | orchestrator | 2026-01-03 04:09:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:39.841241 | orchestrator | 2026-01-03 04:09:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:42.887168 | orchestrator | 2026-01-03 04:09:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:42.888907 | orchestrator | 2026-01-03 04:09:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 04:09:42.888987 | orchestrator | 2026-01-03 04:09:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:45.936065 | orchestrator | 2026-01-03 04:09:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:45.938359 | orchestrator | 2026-01-03 04:09:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:45.938507 | orchestrator | 2026-01-03 04:09:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:48.985958 | orchestrator | 2026-01-03 04:09:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:48.987393 | orchestrator | 2026-01-03 04:09:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:48.987445 | orchestrator | 2026-01-03 04:09:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:52.034877 | orchestrator | 2026-01-03 04:09:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:52.036449 | orchestrator | 2026-01-03 04:09:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:52.036511 | orchestrator | 2026-01-03 04:09:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:55.080587 | orchestrator | 2026-01-03 04:09:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:55.081911 | orchestrator | 2026-01-03 04:09:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:55.081961 | orchestrator | 2026-01-03 04:09:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:09:58.117881 | orchestrator | 2026-01-03 04:09:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:09:58.120134 | orchestrator | 2026-01-03 04:09:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:09:58.120204 | orchestrator | 2026-01-03 04:09:58 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 04:10:01.166748 | orchestrator | 2026-01-03 04:10:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:01.169254 | orchestrator | 2026-01-03 04:10:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:01.169295 | orchestrator | 2026-01-03 04:10:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:04.220953 | orchestrator | 2026-01-03 04:10:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:04.224712 | orchestrator | 2026-01-03 04:10:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:04.224774 | orchestrator | 2026-01-03 04:10:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:07.278484 | orchestrator | 2026-01-03 04:10:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:07.280073 | orchestrator | 2026-01-03 04:10:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:07.280107 | orchestrator | 2026-01-03 04:10:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:10.325784 | orchestrator | 2026-01-03 04:10:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:10.326870 | orchestrator | 2026-01-03 04:10:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:10.327198 | orchestrator | 2026-01-03 04:10:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:13.371328 | orchestrator | 2026-01-03 04:10:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:13.372703 | orchestrator | 2026-01-03 04:10:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:13.372755 | orchestrator | 2026-01-03 04:10:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:16.417040 | orchestrator | 
2026-01-03 04:10:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:16.418853 | orchestrator | 2026-01-03 04:10:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:16.418928 | orchestrator | 2026-01-03 04:10:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:19.461456 | orchestrator | 2026-01-03 04:10:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:19.463886 | orchestrator | 2026-01-03 04:10:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:19.463987 | orchestrator | 2026-01-03 04:10:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:22.511514 | orchestrator | 2026-01-03 04:10:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:22.513616 | orchestrator | 2026-01-03 04:10:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:22.513757 | orchestrator | 2026-01-03 04:10:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:25.553402 | orchestrator | 2026-01-03 04:10:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:25.554862 | orchestrator | 2026-01-03 04:10:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:25.554952 | orchestrator | 2026-01-03 04:10:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:28.596825 | orchestrator | 2026-01-03 04:10:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:28.598218 | orchestrator | 2026-01-03 04:10:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:28.598306 | orchestrator | 2026-01-03 04:10:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:31.643872 | orchestrator | 2026-01-03 04:10:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 04:10:31.645791 | orchestrator | 2026-01-03 04:10:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:31.645841 | orchestrator | 2026-01-03 04:10:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:34.688911 | orchestrator | 2026-01-03 04:10:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:34.690821 | orchestrator | 2026-01-03 04:10:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:34.690873 | orchestrator | 2026-01-03 04:10:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:37.738682 | orchestrator | 2026-01-03 04:10:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:37.741243 | orchestrator | 2026-01-03 04:10:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:37.741293 | orchestrator | 2026-01-03 04:10:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:40.792369 | orchestrator | 2026-01-03 04:10:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:40.794980 | orchestrator | 2026-01-03 04:10:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:40.795056 | orchestrator | 2026-01-03 04:10:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:43.844310 | orchestrator | 2026-01-03 04:10:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:43.846514 | orchestrator | 2026-01-03 04:10:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:43.846557 | orchestrator | 2026-01-03 04:10:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:46.889091 | orchestrator | 2026-01-03 04:10:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:46.891740 | orchestrator | 2026-01-03 04:10:46 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:46.891960 | orchestrator | 2026-01-03 04:10:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:49.943195 | orchestrator | 2026-01-03 04:10:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:49.944610 | orchestrator | 2026-01-03 04:10:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:49.944797 | orchestrator | 2026-01-03 04:10:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:52.993058 | orchestrator | 2026-01-03 04:10:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:52.994852 | orchestrator | 2026-01-03 04:10:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:52.994953 | orchestrator | 2026-01-03 04:10:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:56.038125 | orchestrator | 2026-01-03 04:10:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:56.040016 | orchestrator | 2026-01-03 04:10:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:56.040062 | orchestrator | 2026-01-03 04:10:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:10:59.082996 | orchestrator | 2026-01-03 04:10:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:10:59.084574 | orchestrator | 2026-01-03 04:10:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:10:59.084685 | orchestrator | 2026-01-03 04:10:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:02.134527 | orchestrator | 2026-01-03 04:11:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:02.137512 | orchestrator | 2026-01-03 04:11:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 04:11:02.137586 | orchestrator | 2026-01-03 04:11:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:05.190092 | orchestrator | 2026-01-03 04:11:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:05.191980 | orchestrator | 2026-01-03 04:11:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:05.192036 | orchestrator | 2026-01-03 04:11:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:08.239495 | orchestrator | 2026-01-03 04:11:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:08.241078 | orchestrator | 2026-01-03 04:11:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:08.241130 | orchestrator | 2026-01-03 04:11:08 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:11.286736 | orchestrator | 2026-01-03 04:11:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:11.287516 | orchestrator | 2026-01-03 04:11:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:11.287548 | orchestrator | 2026-01-03 04:11:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:14.336997 | orchestrator | 2026-01-03 04:11:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:14.339289 | orchestrator | 2026-01-03 04:11:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:14.339629 | orchestrator | 2026-01-03 04:11:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:17.388793 | orchestrator | 2026-01-03 04:11:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:17.390532 | orchestrator | 2026-01-03 04:11:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:17.390651 | orchestrator | 2026-01-03 04:11:17 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 04:11:20.435317 | orchestrator | 2026-01-03 04:11:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:20.437152 | orchestrator | 2026-01-03 04:11:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:20.437255 | orchestrator | 2026-01-03 04:11:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:23.477422 | orchestrator | 2026-01-03 04:11:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:23.479377 | orchestrator | 2026-01-03 04:11:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:23.479578 | orchestrator | 2026-01-03 04:11:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:26.526064 | orchestrator | 2026-01-03 04:11:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:26.527146 | orchestrator | 2026-01-03 04:11:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:26.527421 | orchestrator | 2026-01-03 04:11:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:29.573830 | orchestrator | 2026-01-03 04:11:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:29.574974 | orchestrator | 2026-01-03 04:11:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:29.575009 | orchestrator | 2026-01-03 04:11:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:32.620149 | orchestrator | 2026-01-03 04:11:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:32.622839 | orchestrator | 2026-01-03 04:11:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:32.622969 | orchestrator | 2026-01-03 04:11:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:35.671953 | orchestrator | 
2026-01-03 04:11:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:35.673994 | orchestrator | 2026-01-03 04:11:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:35.674102 | orchestrator | 2026-01-03 04:11:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:38.724817 | orchestrator | 2026-01-03 04:11:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:38.728593 | orchestrator | 2026-01-03 04:11:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:38.730155 | orchestrator | 2026-01-03 04:11:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:41.786215 | orchestrator | 2026-01-03 04:11:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:41.787977 | orchestrator | 2026-01-03 04:11:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:41.788287 | orchestrator | 2026-01-03 04:11:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:44.838080 | orchestrator | 2026-01-03 04:11:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:44.840132 | orchestrator | 2026-01-03 04:11:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:44.840198 | orchestrator | 2026-01-03 04:11:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:47.893229 | orchestrator | 2026-01-03 04:11:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:11:47.894434 | orchestrator | 2026-01-03 04:11:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:11:47.894465 | orchestrator | 2026-01-03 04:11:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:11:50.938399 | orchestrator | 2026-01-03 04:11:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED
2026-01-03 04:11:50.940881 | orchestrator | 2026-01-03 04:11:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:11:50.940944 | orchestrator | 2026-01-03 04:11:50 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:11:53.995750 | orchestrator | 2026-01-03 04:11:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:11:53.997813 | orchestrator | 2026-01-03 04:11:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:11:53.997852 | orchestrator | 2026-01-03 04:11:53 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycle repeated every ~3 seconds from 04:11:57 through 04:17:20; tasks c8921031-5513-4b9c-ad45-42b04eed7ef5 and bba6cba5-900c-422c-89b8-94737fda4049 remain in state STARTED throughout ...]
2026-01-03 04:17:23.369619 | orchestrator | 2026-01-03 04:17:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:17:23.371230 | orchestrator | 2026-01-03 04:17:23 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:17:23.371289 | orchestrator | 2026-01-03 04:17:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:26.413025 | orchestrator | 2026-01-03 04:17:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:17:26.415705 | orchestrator | 2026-01-03 04:17:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:17:26.415753 | orchestrator | 2026-01-03 04:17:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:29.462918 | orchestrator | 2026-01-03 04:17:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:17:29.466162 | orchestrator | 2026-01-03 04:17:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:17:29.466224 | orchestrator | 2026-01-03 04:17:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:32.510310 | orchestrator | 2026-01-03 04:17:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:17:32.511757 | orchestrator | 2026-01-03 04:17:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:17:32.511808 | orchestrator | 2026-01-03 04:17:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:35.554815 | orchestrator | 2026-01-03 04:17:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:17:35.556607 | orchestrator | 2026-01-03 04:17:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:17:35.556655 | orchestrator | 2026-01-03 04:17:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:38.607433 | orchestrator | 2026-01-03 04:17:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:17:38.610344 | orchestrator | 2026-01-03 04:17:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 04:17:38.610421 | orchestrator | 2026-01-03 04:17:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:41.660555 | orchestrator | 2026-01-03 04:17:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:17:41.661884 | orchestrator | 2026-01-03 04:17:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:17:41.661993 | orchestrator | 2026-01-03 04:17:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:44.714492 | orchestrator | 2026-01-03 04:17:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:17:44.716346 | orchestrator | 2026-01-03 04:17:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:17:44.716544 | orchestrator | 2026-01-03 04:17:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:47.766746 | orchestrator | 2026-01-03 04:17:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:17:47.769346 | orchestrator | 2026-01-03 04:17:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:17:47.769383 | orchestrator | 2026-01-03 04:17:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:50.820143 | orchestrator | 2026-01-03 04:17:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:17:50.821812 | orchestrator | 2026-01-03 04:17:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:17:50.821854 | orchestrator | 2026-01-03 04:17:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:53.869019 | orchestrator | 2026-01-03 04:17:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:17:53.869840 | orchestrator | 2026-01-03 04:17:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:17:53.869879 | orchestrator | 2026-01-03 04:17:53 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 04:17:56.916138 | orchestrator | 2026-01-03 04:17:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:17:56.917695 | orchestrator | 2026-01-03 04:17:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:17:56.917748 | orchestrator | 2026-01-03 04:17:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:17:59.965303 | orchestrator | 2026-01-03 04:17:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:17:59.967056 | orchestrator | 2026-01-03 04:17:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:17:59.967083 | orchestrator | 2026-01-03 04:17:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:03.014329 | orchestrator | 2026-01-03 04:18:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:03.016193 | orchestrator | 2026-01-03 04:18:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:03.016260 | orchestrator | 2026-01-03 04:18:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:06.064884 | orchestrator | 2026-01-03 04:18:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:06.066800 | orchestrator | 2026-01-03 04:18:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:06.066909 | orchestrator | 2026-01-03 04:18:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:09.113335 | orchestrator | 2026-01-03 04:18:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:09.115173 | orchestrator | 2026-01-03 04:18:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:09.115433 | orchestrator | 2026-01-03 04:18:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:12.154492 | orchestrator | 
2026-01-03 04:18:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:12.157155 | orchestrator | 2026-01-03 04:18:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:12.157220 | orchestrator | 2026-01-03 04:18:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:15.205386 | orchestrator | 2026-01-03 04:18:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:15.206485 | orchestrator | 2026-01-03 04:18:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:15.206539 | orchestrator | 2026-01-03 04:18:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:18.254452 | orchestrator | 2026-01-03 04:18:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:18.256089 | orchestrator | 2026-01-03 04:18:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:18.256224 | orchestrator | 2026-01-03 04:18:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:21.295336 | orchestrator | 2026-01-03 04:18:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:21.297106 | orchestrator | 2026-01-03 04:18:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:21.297155 | orchestrator | 2026-01-03 04:18:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:24.346097 | orchestrator | 2026-01-03 04:18:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:24.346906 | orchestrator | 2026-01-03 04:18:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:24.346938 | orchestrator | 2026-01-03 04:18:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:27.390417 | orchestrator | 2026-01-03 04:18:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 04:18:27.391396 | orchestrator | 2026-01-03 04:18:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:27.391443 | orchestrator | 2026-01-03 04:18:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:30.431906 | orchestrator | 2026-01-03 04:18:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:30.434069 | orchestrator | 2026-01-03 04:18:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:30.434092 | orchestrator | 2026-01-03 04:18:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:33.486425 | orchestrator | 2026-01-03 04:18:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:33.488430 | orchestrator | 2026-01-03 04:18:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:33.488771 | orchestrator | 2026-01-03 04:18:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:36.533933 | orchestrator | 2026-01-03 04:18:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:36.536164 | orchestrator | 2026-01-03 04:18:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:36.536213 | orchestrator | 2026-01-03 04:18:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:39.579439 | orchestrator | 2026-01-03 04:18:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:39.581751 | orchestrator | 2026-01-03 04:18:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:39.581994 | orchestrator | 2026-01-03 04:18:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:42.631313 | orchestrator | 2026-01-03 04:18:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:42.632689 | orchestrator | 2026-01-03 04:18:42 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:42.632746 | orchestrator | 2026-01-03 04:18:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:45.676743 | orchestrator | 2026-01-03 04:18:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:45.678376 | orchestrator | 2026-01-03 04:18:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:45.678421 | orchestrator | 2026-01-03 04:18:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:48.734281 | orchestrator | 2026-01-03 04:18:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:48.737061 | orchestrator | 2026-01-03 04:18:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:48.737176 | orchestrator | 2026-01-03 04:18:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:51.786645 | orchestrator | 2026-01-03 04:18:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:51.788932 | orchestrator | 2026-01-03 04:18:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:51.788993 | orchestrator | 2026-01-03 04:18:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:54.840164 | orchestrator | 2026-01-03 04:18:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:54.842938 | orchestrator | 2026-01-03 04:18:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:18:54.842992 | orchestrator | 2026-01-03 04:18:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:18:57.898723 | orchestrator | 2026-01-03 04:18:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:18:57.900543 | orchestrator | 2026-01-03 04:18:57 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 04:18:57.900622 | orchestrator | 2026-01-03 04:18:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:00.944680 | orchestrator | 2026-01-03 04:19:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:00.946075 | orchestrator | 2026-01-03 04:19:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:00.946158 | orchestrator | 2026-01-03 04:19:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:03.989675 | orchestrator | 2026-01-03 04:19:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:03.990552 | orchestrator | 2026-01-03 04:19:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:03.990800 | orchestrator | 2026-01-03 04:19:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:07.033549 | orchestrator | 2026-01-03 04:19:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:07.034470 | orchestrator | 2026-01-03 04:19:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:07.034503 | orchestrator | 2026-01-03 04:19:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:10.069077 | orchestrator | 2026-01-03 04:19:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:10.070737 | orchestrator | 2026-01-03 04:19:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:10.070794 | orchestrator | 2026-01-03 04:19:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:13.114070 | orchestrator | 2026-01-03 04:19:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:13.115816 | orchestrator | 2026-01-03 04:19:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:13.115855 | orchestrator | 2026-01-03 04:19:13 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 04:19:16.157315 | orchestrator | 2026-01-03 04:19:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:16.159313 | orchestrator | 2026-01-03 04:19:16 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:16.159394 | orchestrator | 2026-01-03 04:19:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:19.201233 | orchestrator | 2026-01-03 04:19:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:19.202730 | orchestrator | 2026-01-03 04:19:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:19.203276 | orchestrator | 2026-01-03 04:19:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:22.248258 | orchestrator | 2026-01-03 04:19:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:22.249309 | orchestrator | 2026-01-03 04:19:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:22.249342 | orchestrator | 2026-01-03 04:19:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:25.288749 | orchestrator | 2026-01-03 04:19:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:25.289808 | orchestrator | 2026-01-03 04:19:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:25.289937 | orchestrator | 2026-01-03 04:19:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:28.334485 | orchestrator | 2026-01-03 04:19:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:28.335690 | orchestrator | 2026-01-03 04:19:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:28.335976 | orchestrator | 2026-01-03 04:19:28 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:31.373732 | orchestrator | 
2026-01-03 04:19:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:31.375956 | orchestrator | 2026-01-03 04:19:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:31.376054 | orchestrator | 2026-01-03 04:19:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:34.422527 | orchestrator | 2026-01-03 04:19:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:34.424890 | orchestrator | 2026-01-03 04:19:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:34.424936 | orchestrator | 2026-01-03 04:19:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:37.468657 | orchestrator | 2026-01-03 04:19:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:37.469728 | orchestrator | 2026-01-03 04:19:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:37.469787 | orchestrator | 2026-01-03 04:19:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:40.511331 | orchestrator | 2026-01-03 04:19:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:40.512986 | orchestrator | 2026-01-03 04:19:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:40.513064 | orchestrator | 2026-01-03 04:19:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:43.558786 | orchestrator | 2026-01-03 04:19:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:43.560785 | orchestrator | 2026-01-03 04:19:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:43.560865 | orchestrator | 2026-01-03 04:19:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:46.607807 | orchestrator | 2026-01-03 04:19:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 04:19:46.610393 | orchestrator | 2026-01-03 04:19:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:46.610450 | orchestrator | 2026-01-03 04:19:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:49.658786 | orchestrator | 2026-01-03 04:19:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:49.660503 | orchestrator | 2026-01-03 04:19:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:49.660524 | orchestrator | 2026-01-03 04:19:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:52.707030 | orchestrator | 2026-01-03 04:19:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:52.708235 | orchestrator | 2026-01-03 04:19:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:52.708280 | orchestrator | 2026-01-03 04:19:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:55.758729 | orchestrator | 2026-01-03 04:19:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:55.760704 | orchestrator | 2026-01-03 04:19:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:55.760746 | orchestrator | 2026-01-03 04:19:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:19:58.805852 | orchestrator | 2026-01-03 04:19:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:19:58.807733 | orchestrator | 2026-01-03 04:19:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:19:58.807977 | orchestrator | 2026-01-03 04:19:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:01.858218 | orchestrator | 2026-01-03 04:20:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:01.860339 | orchestrator | 2026-01-03 04:20:01 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:01.860413 | orchestrator | 2026-01-03 04:20:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:04.903237 | orchestrator | 2026-01-03 04:20:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:04.905385 | orchestrator | 2026-01-03 04:20:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:04.905444 | orchestrator | 2026-01-03 04:20:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:07.953219 | orchestrator | 2026-01-03 04:20:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:07.954635 | orchestrator | 2026-01-03 04:20:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:07.954704 | orchestrator | 2026-01-03 04:20:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:11.004108 | orchestrator | 2026-01-03 04:20:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:11.006087 | orchestrator | 2026-01-03 04:20:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:11.006148 | orchestrator | 2026-01-03 04:20:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:14.051867 | orchestrator | 2026-01-03 04:20:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:14.052790 | orchestrator | 2026-01-03 04:20:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:14.052833 | orchestrator | 2026-01-03 04:20:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:17.095514 | orchestrator | 2026-01-03 04:20:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:17.097028 | orchestrator | 2026-01-03 04:20:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 04:20:17.097076 | orchestrator | 2026-01-03 04:20:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:20.145532 | orchestrator | 2026-01-03 04:20:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:20.147968 | orchestrator | 2026-01-03 04:20:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:20.148031 | orchestrator | 2026-01-03 04:20:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:23.195010 | orchestrator | 2026-01-03 04:20:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:23.197060 | orchestrator | 2026-01-03 04:20:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:23.197122 | orchestrator | 2026-01-03 04:20:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:26.241173 | orchestrator | 2026-01-03 04:20:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:26.242088 | orchestrator | 2026-01-03 04:20:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:26.242124 | orchestrator | 2026-01-03 04:20:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:29.284887 | orchestrator | 2026-01-03 04:20:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:29.285290 | orchestrator | 2026-01-03 04:20:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:29.285317 | orchestrator | 2026-01-03 04:20:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:32.328777 | orchestrator | 2026-01-03 04:20:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:32.330830 | orchestrator | 2026-01-03 04:20:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:32.330888 | orchestrator | 2026-01-03 04:20:32 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 04:20:35.373864 | orchestrator | 2026-01-03 04:20:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:35.376806 | orchestrator | 2026-01-03 04:20:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:35.376887 | orchestrator | 2026-01-03 04:20:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:38.429116 | orchestrator | 2026-01-03 04:20:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:38.431491 | orchestrator | 2026-01-03 04:20:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:38.431600 | orchestrator | 2026-01-03 04:20:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:41.478427 | orchestrator | 2026-01-03 04:20:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:41.480299 | orchestrator | 2026-01-03 04:20:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:41.480362 | orchestrator | 2026-01-03 04:20:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:44.529896 | orchestrator | 2026-01-03 04:20:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:44.532121 | orchestrator | 2026-01-03 04:20:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:44.532218 | orchestrator | 2026-01-03 04:20:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:47.585938 | orchestrator | 2026-01-03 04:20:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:47.587800 | orchestrator | 2026-01-03 04:20:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:47.588026 | orchestrator | 2026-01-03 04:20:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:50.629071 | orchestrator | 
2026-01-03 04:20:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:50.630489 | orchestrator | 2026-01-03 04:20:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:50.630741 | orchestrator | 2026-01-03 04:20:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:53.678517 | orchestrator | 2026-01-03 04:20:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:53.680210 | orchestrator | 2026-01-03 04:20:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:53.680355 | orchestrator | 2026-01-03 04:20:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:56.734186 | orchestrator | 2026-01-03 04:20:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:56.736525 | orchestrator | 2026-01-03 04:20:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:56.736692 | orchestrator | 2026-01-03 04:20:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:20:59.783100 | orchestrator | 2026-01-03 04:20:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:20:59.784985 | orchestrator | 2026-01-03 04:20:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:20:59.785130 | orchestrator | 2026-01-03 04:20:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:21:02.834093 | orchestrator | 2026-01-03 04:21:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:21:02.835483 | orchestrator | 2026-01-03 04:21:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:21:02.835765 | orchestrator | 2026-01-03 04:21:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:21:05.884493 | orchestrator | 2026-01-03 04:21:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED
2026-01-03 04:21:05.886343 | orchestrator | 2026-01-03 04:21:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:21:05.886451 | orchestrator | 2026-01-03 04:21:05 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:21:08.940848 | orchestrator | 2026-01-03 04:21:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:21:08.942250 | orchestrator | 2026-01-03 04:21:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:21:08.942301 | orchestrator | 2026-01-03 04:21:08 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:26:19.912525 | orchestrator | 2026-01-03 04:26:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:26:19.914354 | orchestrator | 2026-01-03 04:26:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:26:19.914409 | orchestrator | 2026-01-03 04:26:19 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:26:22.956930 | orchestrator | 2026-01-03 04:26:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 04:26:22.959397 | orchestrator | 2026-01-03 04:26:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:26:22.959441 | orchestrator | 2026-01-03 04:26:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:26.006409 | orchestrator | 2026-01-03 04:26:26 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:26:26.009966 | orchestrator | 2026-01-03 04:26:26 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:26:26.010541 | orchestrator | 2026-01-03 04:26:26 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:29.060474 | orchestrator | 2026-01-03 04:26:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:26:29.061085 | orchestrator | 2026-01-03 04:26:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:26:29.061108 | orchestrator | 2026-01-03 04:26:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:32.109021 | orchestrator | 2026-01-03 04:26:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:26:32.112757 | orchestrator | 2026-01-03 04:26:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:26:32.112817 | orchestrator | 2026-01-03 04:26:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:35.159794 | orchestrator | 2026-01-03 04:26:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:26:35.161856 | orchestrator | 2026-01-03 04:26:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:26:35.161917 | orchestrator | 2026-01-03 04:26:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:38.210965 | orchestrator | 2026-01-03 04:26:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:26:38.215493 | orchestrator | 2026-01-03 04:26:38 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:26:38.215596 | orchestrator | 2026-01-03 04:26:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:41.267846 | orchestrator | 2026-01-03 04:26:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:26:41.269718 | orchestrator | 2026-01-03 04:26:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:26:41.269764 | orchestrator | 2026-01-03 04:26:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:44.319921 | orchestrator | 2026-01-03 04:26:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:26:44.321600 | orchestrator | 2026-01-03 04:26:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:26:44.321721 | orchestrator | 2026-01-03 04:26:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:47.371130 | orchestrator | 2026-01-03 04:26:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:26:47.372755 | orchestrator | 2026-01-03 04:26:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:26:47.372856 | orchestrator | 2026-01-03 04:26:47 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:50.419056 | orchestrator | 2026-01-03 04:26:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:26:50.420658 | orchestrator | 2026-01-03 04:26:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:26:50.421117 | orchestrator | 2026-01-03 04:26:50 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:53.473037 | orchestrator | 2026-01-03 04:26:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:26:53.475653 | orchestrator | 2026-01-03 04:26:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 04:26:53.475759 | orchestrator | 2026-01-03 04:26:53 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:56.522124 | orchestrator | 2026-01-03 04:26:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:26:56.523394 | orchestrator | 2026-01-03 04:26:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:26:56.523552 | orchestrator | 2026-01-03 04:26:56 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:26:59.577156 | orchestrator | 2026-01-03 04:26:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:26:59.578442 | orchestrator | 2026-01-03 04:26:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:26:59.578481 | orchestrator | 2026-01-03 04:26:59 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:02.627411 | orchestrator | 2026-01-03 04:27:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:02.628791 | orchestrator | 2026-01-03 04:27:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:02.628831 | orchestrator | 2026-01-03 04:27:02 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:05.678102 | orchestrator | 2026-01-03 04:27:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:05.679794 | orchestrator | 2026-01-03 04:27:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:05.679872 | orchestrator | 2026-01-03 04:27:05 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:08.729071 | orchestrator | 2026-01-03 04:27:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:08.729548 | orchestrator | 2026-01-03 04:27:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:08.729819 | orchestrator | 2026-01-03 04:27:08 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 04:27:11.782312 | orchestrator | 2026-01-03 04:27:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:11.783548 | orchestrator | 2026-01-03 04:27:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:11.783765 | orchestrator | 2026-01-03 04:27:11 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:14.834335 | orchestrator | 2026-01-03 04:27:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:14.835285 | orchestrator | 2026-01-03 04:27:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:14.835551 | orchestrator | 2026-01-03 04:27:14 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:17.883757 | orchestrator | 2026-01-03 04:27:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:17.885514 | orchestrator | 2026-01-03 04:27:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:17.885602 | orchestrator | 2026-01-03 04:27:17 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:20.933802 | orchestrator | 2026-01-03 04:27:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:20.935760 | orchestrator | 2026-01-03 04:27:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:20.935812 | orchestrator | 2026-01-03 04:27:20 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:23.982910 | orchestrator | 2026-01-03 04:27:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:23.984695 | orchestrator | 2026-01-03 04:27:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:23.984775 | orchestrator | 2026-01-03 04:27:23 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:27.034674 | orchestrator | 
2026-01-03 04:27:27 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:27.036682 | orchestrator | 2026-01-03 04:27:27 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:27.036738 | orchestrator | 2026-01-03 04:27:27 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:30.084700 | orchestrator | 2026-01-03 04:27:30 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:30.086940 | orchestrator | 2026-01-03 04:27:30 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:30.086992 | orchestrator | 2026-01-03 04:27:30 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:33.129985 | orchestrator | 2026-01-03 04:27:33 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:33.132385 | orchestrator | 2026-01-03 04:27:33 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:33.132464 | orchestrator | 2026-01-03 04:27:33 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:36.181008 | orchestrator | 2026-01-03 04:27:36 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:36.182398 | orchestrator | 2026-01-03 04:27:36 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:36.182446 | orchestrator | 2026-01-03 04:27:36 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:39.228117 | orchestrator | 2026-01-03 04:27:39 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:39.229440 | orchestrator | 2026-01-03 04:27:39 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:39.229478 | orchestrator | 2026-01-03 04:27:39 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:42.281229 | orchestrator | 2026-01-03 04:27:42 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 04:27:42.283658 | orchestrator | 2026-01-03 04:27:42 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:42.283705 | orchestrator | 2026-01-03 04:27:42 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:45.333335 | orchestrator | 2026-01-03 04:27:45 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:45.334320 | orchestrator | 2026-01-03 04:27:45 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:45.334394 | orchestrator | 2026-01-03 04:27:45 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:48.382954 | orchestrator | 2026-01-03 04:27:48 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:48.384646 | orchestrator | 2026-01-03 04:27:48 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:48.384812 | orchestrator | 2026-01-03 04:27:48 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:51.419556 | orchestrator | 2026-01-03 04:27:51 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:51.422371 | orchestrator | 2026-01-03 04:27:51 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:51.422448 | orchestrator | 2026-01-03 04:27:51 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:54.473005 | orchestrator | 2026-01-03 04:27:54 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:54.474796 | orchestrator | 2026-01-03 04:27:54 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:54.474927 | orchestrator | 2026-01-03 04:27:54 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:27:57.520438 | orchestrator | 2026-01-03 04:27:57 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:27:57.521512 | orchestrator | 2026-01-03 04:27:57 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:27:57.521570 | orchestrator | 2026-01-03 04:27:57 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:00.571822 | orchestrator | 2026-01-03 04:28:00 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:00.573445 | orchestrator | 2026-01-03 04:28:00 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:00.573523 | orchestrator | 2026-01-03 04:28:00 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:03.626001 | orchestrator | 2026-01-03 04:28:03 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:03.626758 | orchestrator | 2026-01-03 04:28:03 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:03.626904 | orchestrator | 2026-01-03 04:28:03 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:06.673373 | orchestrator | 2026-01-03 04:28:06 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:06.675271 | orchestrator | 2026-01-03 04:28:06 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:06.675303 | orchestrator | 2026-01-03 04:28:06 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:09.723397 | orchestrator | 2026-01-03 04:28:09 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:09.726603 | orchestrator | 2026-01-03 04:28:09 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:09.726659 | orchestrator | 2026-01-03 04:28:09 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:12.773619 | orchestrator | 2026-01-03 04:28:12 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:12.776460 | orchestrator | 2026-01-03 04:28:12 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 04:28:12.776516 | orchestrator | 2026-01-03 04:28:12 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:15.825010 | orchestrator | 2026-01-03 04:28:15 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:15.826946 | orchestrator | 2026-01-03 04:28:15 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:15.826986 | orchestrator | 2026-01-03 04:28:15 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:18.877811 | orchestrator | 2026-01-03 04:28:18 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:18.879486 | orchestrator | 2026-01-03 04:28:18 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:18.879533 | orchestrator | 2026-01-03 04:28:18 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:21.925533 | orchestrator | 2026-01-03 04:28:21 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:21.927984 | orchestrator | 2026-01-03 04:28:21 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:21.928068 | orchestrator | 2026-01-03 04:28:21 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:24.977098 | orchestrator | 2026-01-03 04:28:24 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:24.978700 | orchestrator | 2026-01-03 04:28:24 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:24.978758 | orchestrator | 2026-01-03 04:28:24 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:28.030590 | orchestrator | 2026-01-03 04:28:28 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:28.031533 | orchestrator | 2026-01-03 04:28:28 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:28.031564 | orchestrator | 2026-01-03 04:28:28 | INFO  | Wait 
1 second(s) until the next check 2026-01-03 04:28:31.076785 | orchestrator | 2026-01-03 04:28:31 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:31.079810 | orchestrator | 2026-01-03 04:28:31 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:31.079891 | orchestrator | 2026-01-03 04:28:31 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:34.127682 | orchestrator | 2026-01-03 04:28:34 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:34.128827 | orchestrator | 2026-01-03 04:28:34 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:34.128949 | orchestrator | 2026-01-03 04:28:34 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:37.162869 | orchestrator | 2026-01-03 04:28:37 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:37.163931 | orchestrator | 2026-01-03 04:28:37 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:37.163983 | orchestrator | 2026-01-03 04:28:37 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:40.204003 | orchestrator | 2026-01-03 04:28:40 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:40.206087 | orchestrator | 2026-01-03 04:28:40 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:40.206224 | orchestrator | 2026-01-03 04:28:40 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:43.251063 | orchestrator | 2026-01-03 04:28:43 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:43.252015 | orchestrator | 2026-01-03 04:28:43 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:43.252170 | orchestrator | 2026-01-03 04:28:43 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:46.298950 | orchestrator | 
2026-01-03 04:28:46 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:46.300181 | orchestrator | 2026-01-03 04:28:46 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:46.300269 | orchestrator | 2026-01-03 04:28:46 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:49.351330 | orchestrator | 2026-01-03 04:28:49 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:49.354380 | orchestrator | 2026-01-03 04:28:49 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:49.354465 | orchestrator | 2026-01-03 04:28:49 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:52.411295 | orchestrator | 2026-01-03 04:28:52 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:52.413750 | orchestrator | 2026-01-03 04:28:52 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:52.413818 | orchestrator | 2026-01-03 04:28:52 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:55.468894 | orchestrator | 2026-01-03 04:28:55 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:55.470290 | orchestrator | 2026-01-03 04:28:55 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:55.470371 | orchestrator | 2026-01-03 04:28:55 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:28:58.527531 | orchestrator | 2026-01-03 04:28:58 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:28:58.530347 | orchestrator | 2026-01-03 04:28:58 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:28:58.530445 | orchestrator | 2026-01-03 04:28:58 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:01.572058 | orchestrator | 2026-01-03 04:29:01 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in 
state STARTED 2026-01-03 04:29:01.573487 | orchestrator | 2026-01-03 04:29:01 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:01.573855 | orchestrator | 2026-01-03 04:29:01 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:04.621463 | orchestrator | 2026-01-03 04:29:04 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:04.625931 | orchestrator | 2026-01-03 04:29:04 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:04.626010 | orchestrator | 2026-01-03 04:29:04 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:07.674947 | orchestrator | 2026-01-03 04:29:07 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:07.676880 | orchestrator | 2026-01-03 04:29:07 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:07.676986 | orchestrator | 2026-01-03 04:29:07 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:10.723795 | orchestrator | 2026-01-03 04:29:10 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:10.725672 | orchestrator | 2026-01-03 04:29:10 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:10.725750 | orchestrator | 2026-01-03 04:29:10 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:13.774336 | orchestrator | 2026-01-03 04:29:13 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:13.776382 | orchestrator | 2026-01-03 04:29:13 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:13.776436 | orchestrator | 2026-01-03 04:29:13 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:16.825920 | orchestrator | 2026-01-03 04:29:16 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:16.826868 | orchestrator | 2026-01-03 04:29:16 | 
INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:16.826922 | orchestrator | 2026-01-03 04:29:16 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:19.874162 | orchestrator | 2026-01-03 04:29:19 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:19.876053 | orchestrator | 2026-01-03 04:29:19 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:19.876305 | orchestrator | 2026-01-03 04:29:19 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:22.921278 | orchestrator | 2026-01-03 04:29:22 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:22.922766 | orchestrator | 2026-01-03 04:29:22 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:22.922870 | orchestrator | 2026-01-03 04:29:22 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:25.974234 | orchestrator | 2026-01-03 04:29:25 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:25.975938 | orchestrator | 2026-01-03 04:29:25 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:25.976041 | orchestrator | 2026-01-03 04:29:25 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:29.025441 | orchestrator | 2026-01-03 04:29:29 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:29.025955 | orchestrator | 2026-01-03 04:29:29 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:29.025987 | orchestrator | 2026-01-03 04:29:29 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:32.072813 | orchestrator | 2026-01-03 04:29:32 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:32.074407 | orchestrator | 2026-01-03 04:29:32 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 
2026-01-03 04:29:32.074465 | orchestrator | 2026-01-03 04:29:32 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:35.123459 | orchestrator | 2026-01-03 04:29:35 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:35.124604 | orchestrator | 2026-01-03 04:29:35 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:35.124801 | orchestrator | 2026-01-03 04:29:35 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:38.169752 | orchestrator | 2026-01-03 04:29:38 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:38.170406 | orchestrator | 2026-01-03 04:29:38 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:38.170436 | orchestrator | 2026-01-03 04:29:38 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:41.214507 | orchestrator | 2026-01-03 04:29:41 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:41.216216 | orchestrator | 2026-01-03 04:29:41 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:41.216311 | orchestrator | 2026-01-03 04:29:41 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:44.260907 | orchestrator | 2026-01-03 04:29:44 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:44.263680 | orchestrator | 2026-01-03 04:29:44 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:44.263815 | orchestrator | 2026-01-03 04:29:44 | INFO  | Wait 1 second(s) until the next check 2026-01-03 04:29:47.310311 | orchestrator | 2026-01-03 04:29:47 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED 2026-01-03 04:29:47.311461 | orchestrator | 2026-01-03 04:29:47 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED 2026-01-03 04:29:47.311502 | orchestrator | 2026-01-03 04:29:47 | INFO  | Wait 
1 second(s) until the next check
2026-01-03 04:29:50.362358 | orchestrator | 2026-01-03 04:29:50 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:29:50.363797 | orchestrator | 2026-01-03 04:29:50 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:29:50.363845 | orchestrator | 2026-01-03 04:29:50 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:29:53.407994 | orchestrator | 2026-01-03 04:29:53 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:29:53.410182 | orchestrator | 2026-01-03 04:29:53 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:29:53.410224 | orchestrator | 2026-01-03 04:29:53 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:29:56.451008 | orchestrator | 2026-01-03 04:29:56 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:29:56.452629 | orchestrator | 2026-01-03 04:29:56 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:29:56.452693 | orchestrator | 2026-01-03 04:29:56 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:29:59.499142 | orchestrator | 2026-01-03 04:29:59 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:29:59.500894 | orchestrator | 2026-01-03 04:29:59 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:29:59.500950 | orchestrator | 2026-01-03 04:29:59 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:30:02.548837 | orchestrator | 2026-01-03 04:30:02 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:30:02.552504 | orchestrator | 2026-01-03 04:30:02 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:30:02.552560 | orchestrator | 2026-01-03 04:30:02 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:30:05.603376 | orchestrator | 2026-01-03 04:30:05 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:30:05.604572 | orchestrator | 2026-01-03 04:30:05 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:30:05.604609 | orchestrator | 2026-01-03 04:30:05 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:30:08.655740 | orchestrator | 2026-01-03 04:30:08 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:30:08.658166 | orchestrator | 2026-01-03 04:30:08 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:30:08.658240 | orchestrator | 2026-01-03 04:30:08 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:30:11.703974 | orchestrator | 2026-01-03 04:30:11 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:30:11.706187 | orchestrator | 2026-01-03 04:30:11 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:30:11.706259 | orchestrator | 2026-01-03 04:30:11 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:30:14.756477 | orchestrator | 2026-01-03 04:30:14 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:30:14.757705 | orchestrator | 2026-01-03 04:30:14 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:30:14.757746 | orchestrator | 2026-01-03 04:30:14 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:30:17.806638 | orchestrator | 2026-01-03 04:30:17 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:30:17.806734 | orchestrator | 2026-01-03 04:30:17 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:30:17.806746 | orchestrator | 2026-01-03 04:30:17 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:30:20.856499 | orchestrator | 2026-01-03 04:30:20 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:30:20.857584 | orchestrator | 2026-01-03 04:30:20 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:30:20.857634 | orchestrator | 2026-01-03 04:30:20 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:30:23.901196 | orchestrator | 2026-01-03 04:30:23 | INFO  | Task c8921031-5513-4b9c-ad45-42b04eed7ef5 is in state STARTED
2026-01-03 04:30:23.901323 | orchestrator | 2026-01-03 04:30:23 | INFO  | Task bba6cba5-900c-422c-89b8-94737fda4049 is in state STARTED
2026-01-03 04:30:23.901350 | orchestrator | 2026-01-03 04:30:23 | INFO  | Wait 1 second(s) until the next check
2026-01-03 04:30:24.702000 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-01-03 04:30:24.703860 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-03 04:30:25.521123 |
2026-01-03 04:30:25.521306 | PLAY [Post output play]
2026-01-03 04:30:25.547971 |
2026-01-03 04:30:25.548146 | LOOP [stage-output : Register sources]
2026-01-03 04:30:25.619473 |
2026-01-03 04:30:25.619765 | TASK [stage-output : Check sudo]
2026-01-03 04:30:26.502128 | orchestrator | sudo: a password is required
2026-01-03 04:30:26.670139 | orchestrator | ok: Runtime: 0:00:00.010901
2026-01-03 04:30:26.684549 |
2026-01-03 04:30:26.684731 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-03 04:30:26.723041 |
2026-01-03 04:30:26.723375 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-03 04:30:26.795680 | orchestrator | ok
2026-01-03 04:30:26.802761 |
2026-01-03 04:30:26.802922 | LOOP [stage-output : Ensure target folders exist]
2026-01-03 04:30:27.281208 | orchestrator | ok: "docs"
2026-01-03 04:30:27.281538 |
2026-01-03 04:30:27.541956 | orchestrator | ok: "artifacts"
2026-01-03 04:30:27.810992 | orchestrator | ok: "logs"
2026-01-03 04:30:27.832827 |
2026-01-03 04:30:27.833011 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-03 04:30:27.864817 |
2026-01-03 04:30:27.865025 | TASK [stage-output : Make all log files readable]
2026-01-03 04:30:28.183500 | orchestrator | ok
2026-01-03 04:30:28.200542 |
2026-01-03 04:30:28.200787 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-03 04:30:28.239350 | orchestrator | skipping: Conditional result was False
2026-01-03 04:30:28.248144 |
2026-01-03 04:30:28.248290 | TASK [stage-output : Discover log files for compression]
2026-01-03 04:30:28.272563 | orchestrator | skipping: Conditional result was False
2026-01-03 04:30:28.280505 |
2026-01-03 04:30:28.280637 | LOOP [stage-output : Archive everything from logs]
2026-01-03 04:30:28.322252 |
2026-01-03 04:30:28.322431 | PLAY [Post cleanup play]
2026-01-03 04:30:28.360458 |
2026-01-03 04:30:28.360596 | TASK [Set cloud fact (Zuul deployment)]
2026-01-03 04:30:28.410955 | orchestrator | ok
2026-01-03 04:30:28.419835 |
2026-01-03 04:30:28.419957 | TASK [Set cloud fact (local deployment)]
2026-01-03 04:30:28.454324 | orchestrator | skipping: Conditional result was False
2026-01-03 04:30:28.466371 |
2026-01-03 04:30:28.466519 | TASK [Clean the cloud environment]
2026-01-03 04:30:30.907358 | orchestrator | 2026-01-03 04:30:30 - clean up servers
2026-01-03 04:30:31.835274 | orchestrator | 2026-01-03 04:30:31 - testbed-manager
2026-01-03 04:30:31.931637 | orchestrator | 2026-01-03 04:30:31 - testbed-node-2
2026-01-03 04:30:32.019929 | orchestrator | 2026-01-03 04:30:32 - testbed-node-5
2026-01-03 04:30:32.115457 | orchestrator | 2026-01-03 04:30:32 - testbed-node-0
2026-01-03 04:30:32.202166 | orchestrator | 2026-01-03 04:30:32 - testbed-node-4
2026-01-03 04:30:32.298527 | orchestrator | 2026-01-03 04:30:32 - testbed-node-1
2026-01-03 04:30:32.386805 | orchestrator | 2026-01-03 04:30:32 - testbed-node-3
2026-01-03 04:30:32.482152 | orchestrator | 2026-01-03 04:30:32 - clean up keypairs
2026-01-03 04:30:32.501349 | orchestrator | 2026-01-03 04:30:32 - testbed
2026-01-03 04:30:32.526081 | orchestrator | 2026-01-03 04:30:32 - wait for servers to be gone
2026-01-03 04:30:47.821947 | orchestrator | 2026-01-03 04:30:47 - clean up ports
2026-01-03 04:30:48.062690 | orchestrator | 2026-01-03 04:30:48 - 3956e3f5-5c59-4347-af76-e0750641b8fa
2026-01-03 04:30:48.367508 | orchestrator | 2026-01-03 04:30:48 - bb251481-0f92-4a14-9d72-53c3879a4dd9
2026-01-03 04:30:48.660112 | orchestrator | 2026-01-03 04:30:48 - c4a46c62-0b10-422b-9258-560a8fbbe574
2026-01-03 04:30:49.005730 | orchestrator | 2026-01-03 04:30:49 - d3c9b0b1-331a-483c-8691-e473bbd9dea5
2026-01-03 04:30:49.268194 | orchestrator | 2026-01-03 04:30:49 - d9d978d1-841e-4bae-b717-417b6174655d
2026-01-03 04:30:49.646322 | orchestrator | 2026-01-03 04:30:49 - dff11b58-fb44-4b93-bf0b-24cfded620b5
2026-01-03 04:30:49.917204 | orchestrator | 2026-01-03 04:30:49 - eb3f2189-92eb-4375-a01d-b30b4b327bf1
2026-01-03 04:30:50.390801 | orchestrator | 2026-01-03 04:30:50 - clean up volumes
2026-01-03 04:30:50.539682 | orchestrator | 2026-01-03 04:30:50 - testbed-volume-3-node-base
2026-01-03 04:30:50.581763 | orchestrator | 2026-01-03 04:30:50 - testbed-volume-4-node-base
2026-01-03 04:30:50.625238 | orchestrator | 2026-01-03 04:30:50 - testbed-volume-manager-base
2026-01-03 04:30:50.667297 | orchestrator | 2026-01-03 04:30:50 - testbed-volume-0-node-base
2026-01-03 04:30:50.715728 | orchestrator | 2026-01-03 04:30:50 - testbed-volume-5-node-base
2026-01-03 04:30:50.758624 | orchestrator | 2026-01-03 04:30:50 - testbed-volume-1-node-base
2026-01-03 04:30:50.803157 | orchestrator | 2026-01-03 04:30:50 - testbed-volume-2-node-base
2026-01-03 04:30:50.855710 | orchestrator | 2026-01-03 04:30:50 - testbed-volume-4-node-4
2026-01-03 04:30:50.910287 | orchestrator | 2026-01-03 04:30:50 - testbed-volume-2-node-5
2026-01-03 04:30:50.951267 | orchestrator | 2026-01-03 04:30:50 - testbed-volume-6-node-3
2026-01-03 04:30:50.995812 | orchestrator | 2026-01-03 04:30:50 - testbed-volume-1-node-4
2026-01-03 04:30:51.047311 | orchestrator | 2026-01-03 04:30:51 - testbed-volume-3-node-3
2026-01-03 04:30:51.091365 | orchestrator | 2026-01-03 04:30:51 - testbed-volume-5-node-5
2026-01-03 04:30:51.136113 | orchestrator | 2026-01-03 04:30:51 - testbed-volume-8-node-5
2026-01-03 04:30:51.188858 | orchestrator | 2026-01-03 04:30:51 - testbed-volume-7-node-4
2026-01-03 04:30:51.240823 | orchestrator | 2026-01-03 04:30:51 - testbed-volume-0-node-3
2026-01-03 04:30:51.287150 | orchestrator | 2026-01-03 04:30:51 - disconnect routers
2026-01-03 04:30:51.420799 | orchestrator | 2026-01-03 04:30:51 - testbed
2026-01-03 04:30:53.273955 | orchestrator | 2026-01-03 04:30:53 - clean up subnets
2026-01-03 04:30:53.322887 | orchestrator | 2026-01-03 04:30:53 - subnet-testbed-management
2026-01-03 04:30:53.590314 | orchestrator | 2026-01-03 04:30:53 - clean up networks
2026-01-03 04:30:53.789105 | orchestrator | 2026-01-03 04:30:53 - net-testbed-management
2026-01-03 04:30:54.139930 | orchestrator | 2026-01-03 04:30:54 - clean up security groups
2026-01-03 04:30:54.185433 | orchestrator | 2026-01-03 04:30:54 - testbed-management
2026-01-03 04:30:54.360759 | orchestrator | 2026-01-03 04:30:54 - testbed-node
2026-01-03 04:30:54.534417 | orchestrator | 2026-01-03 04:30:54 - clean up floating ips
2026-01-03 04:30:54.574667 | orchestrator | 2026-01-03 04:30:54 - 81.163.193.18
2026-01-03 04:30:54.956913 | orchestrator | 2026-01-03 04:30:54 - clean up routers
2026-01-03 04:30:55.092108 | orchestrator | 2026-01-03 04:30:55 - testbed
2026-01-03 04:30:56.530261 | orchestrator | ok: Runtime: 0:00:27.287743
2026-01-03 04:30:56.534913 |
2026-01-03 04:30:56.535095 | PLAY RECAP
2026-01-03 04:30:56.535226 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-03 04:30:56.535287 |
2026-01-03 04:30:56.686939 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-03 04:30:56.688033 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-03 04:30:57.479065 |
2026-01-03 04:30:57.479247 | PLAY [Cleanup play]
2026-01-03 04:30:57.495895 |
2026-01-03 04:30:57.496051 | TASK [Set cloud fact (Zuul deployment)]
2026-01-03 04:30:57.555193 | orchestrator | ok
2026-01-03 04:30:57.564517 |
2026-01-03 04:30:57.564703 | TASK [Set cloud fact (local deployment)]
2026-01-03 04:30:57.601369 | orchestrator | skipping: Conditional result was False
2026-01-03 04:30:57.618742 |
2026-01-03 04:30:57.618933 | TASK [Clean the cloud environment]
2026-01-03 04:30:58.793325 | orchestrator | 2026-01-03 04:30:58 - clean up servers
2026-01-03 04:30:59.400485 | orchestrator | 2026-01-03 04:30:59 - clean up keypairs
2026-01-03 04:30:59.419799 | orchestrator | 2026-01-03 04:30:59 - wait for servers to be gone
2026-01-03 04:30:59.463846 | orchestrator | 2026-01-03 04:30:59 - clean up ports
2026-01-03 04:30:59.545348 | orchestrator | 2026-01-03 04:30:59 - clean up volumes
2026-01-03 04:30:59.606857 | orchestrator | 2026-01-03 04:30:59 - disconnect routers
2026-01-03 04:30:59.631685 | orchestrator | 2026-01-03 04:30:59 - clean up subnets
2026-01-03 04:30:59.650998 | orchestrator | 2026-01-03 04:30:59 - clean up networks
2026-01-03 04:30:59.873225 | orchestrator | 2026-01-03 04:30:59 - clean up security groups
2026-01-03 04:30:59.910887 | orchestrator | 2026-01-03 04:30:59 - clean up floating ips
2026-01-03 04:30:59.941497 | orchestrator | 2026-01-03 04:30:59 - clean up routers
2026-01-03 04:31:00.159452 | orchestrator | ok: Runtime: 0:00:01.551821
2026-01-03 04:31:00.163594 |
2026-01-03 04:31:00.163799 | PLAY RECAP
2026-01-03 04:31:00.163937 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-03 04:31:00.164006 |
2026-01-03 04:31:00.301646 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-03 04:31:00.304633 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-03 04:31:01.108884 |
2026-01-03 04:31:01.109070 | PLAY [Base post-fetch]
2026-01-03 04:31:01.129478 |
2026-01-03 04:31:01.129749 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-03 04:31:01.217075 | orchestrator | skipping: Conditional result was False
2026-01-03 04:31:01.233470 |
2026-01-03 04:31:01.233777 | TASK [fetch-output : Set log path for single node]
2026-01-03 04:31:01.283919 | orchestrator | ok
2026-01-03 04:31:01.292294 |
2026-01-03 04:31:01.292440 | LOOP [fetch-output : Ensure local output dirs]
2026-01-03 04:31:01.799291 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/c5d31f13fb3e48e7b669edbaeaa9591b/work/logs"
2026-01-03 04:31:02.105875 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c5d31f13fb3e48e7b669edbaeaa9591b/work/artifacts"
2026-01-03 04:31:02.385381 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c5d31f13fb3e48e7b669edbaeaa9591b/work/docs"
2026-01-03 04:31:02.406604 |
2026-01-03 04:31:02.406817 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-03 04:31:03.357095 | orchestrator | changed: .d..t...... ./
2026-01-03 04:31:03.357453 | orchestrator | changed: All items complete
2026-01-03 04:31:03.357517 |
2026-01-03 04:31:04.196888 | orchestrator | changed: .d..t...... ./
2026-01-03 04:31:04.912133 | orchestrator | changed: .d..t...... ./
2026-01-03 04:31:04.935339 |
2026-01-03 04:31:04.935492 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-03 04:31:04.976031 | orchestrator | skipping: Conditional result was False
2026-01-03 04:31:04.985201 | orchestrator | skipping: Conditional result was False
2026-01-03 04:31:05.006535 |
2026-01-03 04:31:05.006690 | PLAY RECAP
2026-01-03 04:31:05.006771 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-03 04:31:05.006816 |
2026-01-03 04:31:05.145004 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-03 04:31:05.147543 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-03 04:31:05.924250 |
2026-01-03 04:31:05.924496 | PLAY [Base post]
2026-01-03 04:31:05.940708 |
2026-01-03 04:31:05.940871 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-03 04:31:06.999639 | orchestrator | changed
2026-01-03 04:31:07.008458 |
2026-01-03 04:31:07.008602 | PLAY RECAP
2026-01-03 04:31:07.008687 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-03 04:31:07.008753 |
2026-01-03 04:31:07.153302 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-03 04:31:07.154611 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-03 04:31:08.055015 |
2026-01-03 04:31:08.055262 | PLAY [Base post-logs]
2026-01-03 04:31:08.067798 |
2026-01-03 04:31:08.067968 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-03 04:31:08.531426 | localhost | changed
2026-01-03 04:31:08.551574 |
2026-01-03 04:31:08.551895 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-03 04:31:08.579407 | localhost | ok
2026-01-03 04:31:08.585072 |
2026-01-03 04:31:08.585199 | TASK [Set zuul-log-path fact]
2026-01-03 04:31:08.604150 | localhost | ok
2026-01-03 04:31:08.618749 |
2026-01-03 04:31:08.618900 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-03 04:31:08.644357 | localhost | ok
2026-01-03 04:31:08.647925 |
2026-01-03 04:31:08.648039 | TASK [upload-logs : Create log directories]
2026-01-03 04:31:09.145334 | localhost | changed
2026-01-03 04:31:09.149419 |
2026-01-03 04:31:09.149565 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-03 04:31:09.684030 | localhost -> localhost | ok: Runtime: 0:00:00.007307
2026-01-03 04:31:09.688199 |
2026-01-03 04:31:09.688320 | TASK [upload-logs : Upload logs to log server]
2026-01-03 04:31:10.334765 | localhost | Output suppressed because no_log was given
2026-01-03 04:31:10.336979 |
2026-01-03 04:31:10.337105 | LOOP [upload-logs : Compress console log and json output]
2026-01-03 04:31:10.413168 | localhost | skipping: Conditional result was False
2026-01-03 04:31:10.427522 | localhost | skipping: Conditional result was False
2026-01-03 04:31:10.441219 |
2026-01-03 04:31:10.441364 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-03 04:31:10.493550 | localhost | skipping: Conditional result was False
2026-01-03 04:31:10.493888 |
2026-01-03 04:31:10.499477 | localhost | skipping: Conditional result was False
2026-01-03 04:31:10.510019 |
2026-01-03 04:31:10.510159 | LOOP [upload-logs : Upload console log and json output]